821 results for cloud computing, hypervisor, virtualization, live migration, infrastructure as a service
Abstract:
This article addresses the main technical aspects that facilitate the use and growth of computing in the cloud, which go hand in hand with the emergence of more and better services on the Internet and with the technological development of broadband. Finally, we examine the impact of cloud computing technologies on the automation of information units.
Abstract:
Introduction: Diet-therapy software packages are currently a basic tool in the dietary treatment of patients, whether from a physiological and/or pathological point of view. New technologies and research in this area have favoured the appearance of new dietary-nutritional management applications that make it easier to run a diet-therapy practice. Objectives: To carry out a comparative study of the main diet-therapy applications on the market, in order to give dietetics and nutrition professionals criteria for selecting one of the main tools available to them. Results: From our point of view, dietopro.com proves to be, together with some of the other diet-therapy applications analysed, one of the most complete for managing a nutrition clinic. Conclusion: Depending on their needs, users have different dietary software packages to choose from; the selection of one or another depends on the needs of the professional.
Abstract:
This Policy Contribution assesses the broad obstacles hampering ICT-led growth in Europe and identifies the main areas in which policy could unlock the greatest value. We review estimates of the value that could be generated through take-up of various technologies and broadly match them to policy areas. According to the literature survey and the collected estimates, the areas in which the right policies could unlock the greatest ICT-led growth are product and labour market regulations and the European Single Market. These areas should be reformed to make European markets more flexible and competitive. This would promote wider adoption of modern data-driven organisational and management practices, thereby helping to close the productivity gap between the United States and the European Union. Gains could also be made in the areas of privacy, data security, intellectual property and liability pertaining to the digital economy, especially cloud computing, and in next-generation network infrastructure investment. Standardisation and spectrum allocation issues are found to be important, though to a lesser degree. Strong complementarities between the analysed technologies suggest, however, that policymakers need to deal with all of the identified obstacles in order to fully realise the potential of ICT to spur long-term growth beyond the partial gains that we report.
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
This thesis describes the main features of the service-oriented programming language Jolie, analysing its syntax at length and giving examples of the use of its operators and constructs. It provides an overview of SOC, SOA, Web Services, Cloud Computing, Orchestration, Choreography, Deployment and Behaviour, the last two examined in separate chapters. The thesis concludes with an example of converting WSDL services to Jolie, producing a usage example for the converted Web Service. The document also touches on the historical development of the language and its developers, as well as the APIs the language provides.
Abstract:
The Future Communication Architecture for Mobile Cloud Services: Mobile Cloud Networking (MCN) is an EU FP7 Large-scale Integrating Project (IP) funded by the European Commission. The MCN project was launched in November 2012 for a period of 36 months. In total, 19 top-tier partners from industry and academia have committed to jointly establishing the vision of Mobile Cloud Networking and to developing a fully cloud-based mobile communication and application platform.
Abstract:
This thesis aims to provide an up-to-date analysis of the recent evolution of Cloud Computing and of the new architectural models that support the continually growing demand for computing, storage and network resources inside data centres, and then turns to an experimental phase of single and concurrent live migrations of virtual machines, studying their performance in terms of application and network resources within the open-source virtualization platform QEMU-KVM, which today underlies cloud-based systems such as OpenStack. The first chapter surveys the state of the art of Cloud Computing, its current limits and the prospects offered by a Cloud Federation model in the near future. The second chapter discusses in detail live migration techniques, a recent point of reference for the international scientific community, and their possible optimizations in inter- and intra-data-centre scenarios, with the aim of laying the theoretical groundwork for the in-depth study of the actual implementation of the migration process on the QEMU-KVM platform, which is addressed in the third chapter. That chapter describes the architectural and operating principles of the hypervisor and defines the design model and the algorithm underlying the migration process. Finally, the fourth chapter presents the work carried out, the configuration and design choices made to build a testbed suited to studying concurrent live migration sessions, and discusses the results of the performance measurements and of the system behaviour observed in the experiments.
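As a purely illustrative, hedged sketch (not the testbed configuration used in the thesis), a single live migration of a KVM guest can be triggered from the libvirt Python bindings roughly as follows; the connection URIs and the guest name are hypothetical.

    import libvirt

    # Hypothetical connection URIs and guest name, for illustration only.
    SRC_URI = "qemu+ssh://host-a/system"
    DST_URI = "qemu+ssh://host-b/system"
    GUEST = "vm-under-test"

    src = libvirt.open(SRC_URI)      # source hypervisor (QEMU-KVM via libvirt)
    dst = libvirt.open(DST_URI)      # destination hypervisor
    dom = src.lookupByName(GUEST)    # the running guest to be moved

    # Request a live (pre-copy) migration: the guest keeps running while its
    # memory pages are copied iteratively to the destination host.
    migrated = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    print("guest now running on destination as:", migrated.name())

    src.close()
    dst.close()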
Abstract:
Dissertation submitted for the degree of Master in Computer Engineering
Abstract:
Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to these resources but does not guarantee any quality of service. Moreover, Grid does not provide performance isolation: one user's job can influence the performance of another user's job. A further problem with Grid is that its users belong to the scientific community and their jobs require specific, customized software environments; providing the perfect environment to the user is very difficult in Grid because of its dispersed and heterogeneous nature. Cloud computing, by contrast, provides full customization and control, but there is no simple procedure for submitting user jobs as there is in Grid. Grid computing can provide customized resources and performance to the user by means of virtualization: a virtual machine can join the Grid as an execution node, or it can itself be submitted as a job with the user's jobs inside. Where the first method gives quality of service and performance isolation, the second additionally provides customization and administration. In this thesis, a solution is proposed to enable virtual machine reuse, providing performance isolation together with customization and administration; the same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request, and two scenarios are described to achieve this goal. In the first scenario, users submit their customized virtual machine as a job, and the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured on the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios; Condor supports virtual machine jobs, and Scenario 1 is deployed using the Condor VM universe, while Scenario 2 uses the VMware-VIX API to script powering the remote virtual machines on and off. The experimental results show that, because Scenario 2 does not need to transfer the virtual machine image, the virtual machine becomes live in the pool much faster. In Scenario 1, the virtual machine runs as a Condor job, so it is easy to administer; its only pitfall is the network traffic.
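As a rough illustration of the second scenario's idea only: the thesis scripts the power cycle of preconfigured guests with the VMware-VIX API, whereas this minimal sketch substitutes the libvirt Python bindings and a hypothetical guest name purely to show the same power-on/power-off pattern.

    import libvirt

    # Hypothetical hypervisor URI and preconfigured guest, for illustration only.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("grid-worker-01")

    def power_on(domain):
        """Boot the preconfigured VM so it can join the Grid pool on request."""
        if not domain.isActive():
            domain.create()

    def power_off(domain):
        """Ask the guest OS to shut down once its jobs have finished."""
        if domain.isActive():
            domain.shutdown()

    power_on(dom)
    # ... the VM's middleware registers it with the pool and runs user jobs ...
    power_off(dom)
    conn.close()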
Abstract:
Millions of enterprises move their applications to a cloud every year. According to Forrester Research, "the global cloud computing market will grow from $40.7 billion in 2011 to $241 billion in 2020". Due to increased interest and demand, a broad range of providers and solutions has appeared on the market. It is vital to be able to predict possible problems correctly and to classify and mitigate the risks associated with the migration process. The study shows the main criteria that should be taken into consideration when deciding to move enterprise applications to the cloud and choosing an appropriate vendor. The main goal of the research is to identify the main problems during migration to a cloud and to propose a solution for their prevention and for mitigating the consequences should they occur. The research provides an overview of existing cloud solutions and deployment models for enterprise applications. It identifies the decision drivers of application migration to a cloud and the potential risks and benefits associated with it. Finally, best practices for successful enterprise-to-cloud migration, based on the analysis of case studies, are formulated.
Abstract:
Cloud computing, despite its success and promise, presents issues for businesses migrating their legacy applications to the cloud. In this research, legacy-to-cloud migration issues are reviewed based on literature findings and an experience report. Solutions are applied to the Tieto Open Application Suite (TOAS), a software development platform running on cloud infrastructure. It is observed that the migration strategy heavily affects the migration approach; for TOAS, a strategy of redesigning the applications for the cloud is suggested. Common migration-driven application-level modifications include adaptation to service-oriented architecture, load balancing, and runtime and technology changes. A cloud platform such as TOAS may introduce additional needs. Deciding on a migration strategy is found to be an issue to be solved case by case, and the use of assistive decision-making tools is suggested.
Abstract:
In particle physics, performing data analysis requires large amounts of computing and storage capacity. The LHC Computing Grid is both a computing infrastructure on a global scale and a set of services, developed by a large community of physicists and computer scientists and distributed across computing centres around the world. This infrastructure has proved its value in the analysis of the data collected during Run-1 of the LHC, playing a fundamental role in the discovery of the Higgs boson. Today Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by numerous scientific communities. Given the technical requirements of Run-2 (and subsequent runs) of the LHC, the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions. The purpose of this thesis is to test a Cloud infrastructure and compare its performance with that of the LHC Computing Grid. Chapter 1 gives a general account of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments operating at it, with particular attention to the CMS experiment. Chapter 3 deals with Computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my work on the comparative analysis of Grid and Cloud performance.
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the world-wide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proved to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption among many scientific organizations and beyond. Cloud allows access to and use of large, externally owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend onto on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case for meeting the CMS experiment's needs. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing “as a Service”. The impact of this work on benchmark CMS physics use cases is also demonstrated.
Abstract:
One of the main features of virtualization technology is Live Migration, which allows virtual machines to be moved between physical machines without interrupting execution. This feature enables more sophisticated policies within a cloud computing environment, such as the optimization of power consumption and of computational resource usage. However, Live Migration can impose severe performance degradation on the virtual machines' applications and cause several impacts on the service providers' infrastructure, such as network congestion and interference with virtual machines co-resident on the physical machines. Unlike many studies, this work considers the virtual machine's workload an important factor and argues that choosing an appropriate moment for migrating the virtual machine can reduce the penalties imposed by Live Migration. This work introduces Application-aware Live Migration (ALMA), which intercepts Live Migration requests and, based on the application's workload, postpones the migration to a more favourable moment. The experiments conducted in this work showed that the architecture reduced migration time by up to 74% in the experiments with benchmarks and by up to 67% in the experiments with real workloads. The data transferred by Live Migration was reduced by up to 62%. In addition, this work introduces a model that predicts the cost of Live Migration for a given workload, as well as a migration algorithm that is not sensitive to the virtual machine's memory usage.
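As a hypothetical sketch of the deferral idea only (not ALMA's actual implementation): a migration request is held back until the application's workload, approximated here by a memory-dirtying rate obtained from an assumed monitoring hook, falls below a threshold, after which the migration is started. All names and values below are illustrative.

    import time

    # Hypothetical threshold, sampling period and timeout, for illustration only.
    DIRTY_RATE_THRESHOLD_MB_S = 50.0   # migrate only while memory is dirtied slowly
    SAMPLE_PERIOD_S = 30
    MAX_WAIT_S = 1800                  # stop deferring after 30 minutes

    def dirty_rate_mb_s(vm_name):
        """Stub for a workload monitor (e.g. sampled dirty-page statistics)."""
        return 10.0                    # placeholder value so the sketch runs

    def start_live_migration(vm_name, destination_host):
        """Stub for the call that actually triggers the live migration."""
        print(f"migrating {vm_name} to {destination_host}")

    def deferred_migration(vm_name, destination_host):
        """Intercept a migration request and postpone it to a calmer moment."""
        waited = 0
        while waited < MAX_WAIT_S and dirty_rate_mb_s(vm_name) >= DIRTY_RATE_THRESHOLD_MB_S:
            time.sleep(SAMPLE_PERIOD_S)  # workload still heavy: wait and sample again
            waited += SAMPLE_PERIOD_S
        start_live_migration(vm_name, destination_host)

    deferred_migration("app-vm-01", "host-b")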