977 results for Distributed virtual machines
Abstract:
The aim of task scheduling is to minimize the makespan of applications while making the best possible use of shared resources. Applications have requirements which call for customized environments for their execution. One way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) on grid resources and tasks on these VMs. The schedulers differ from previous work by the joint scheduling of tasks and VMs and by considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
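To make the joint formulation concrete, here is a minimal sketch of such an integer linear program in Python with PuLP, assuming a deliberately simplified model: one VM per task, sequential execution per host, and a VM deployment delay determined by each host's bandwidth to an image repository. All names and numbers are illustrative and are not taken from the paper.

# Joint VM/task placement ILP sketch (simplified, illustrative model).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks = ["t1", "t2", "t3", "t4"]
hosts = ["h1", "h2"]
exec_time = {"t1": 10, "t2": 4, "t3": 7, "t4": 5}        # seconds
vm_image_mb = 512                                        # size of the VM image
bandwidth_mbps = {"h1": 100, "h2": 20}                   # link to the image repository
deploy_time = {h: vm_image_mb * 8.0 / bandwidth_mbps[h] for h in hosts}

prob = LpProblem("joint_vm_task_scheduling", LpMinimize)
x = LpVariable.dicts("x", (tasks, hosts), cat=LpBinary)  # x[t][h] = 1 if task t runs in a VM on host h
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                         # objective: minimize the makespan
for t in tasks:                                          # every task is placed exactly once
    prob += lpSum(x[t][h] for h in hosts) == 1
for h in hosts:                                          # per-host load (VM deployment + execution) bounds the makespan
    prob += lpSum((deploy_time[h] + exec_time[t]) * x[t][h] for t in tasks) <= makespan

prob.solve()
for t in tasks:
    print(t, "->", [h for h in hosts if x[t][h].value() > 0.5][0])
print("makespan:", makespan.value())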
Abstract:
Cloud computing means using computing resources that are available over a network, usually the Internet, and is an area that has grown rapidly in recent years. More and more companies are migrating all or parts of their operations to the cloud. Sogeti in Borlänge needs to migrate its development environments to a cloud service, since operating and maintaining them is costly and time-consuming. As a Microsoft partner, Sogeti wants to use Microsoft's cloud computing service, Windows Azure, for this purpose. Migration to the cloud is a new area for Sogeti, and they have no descriptions of how such a process is carried out. Our assignment was to develop an approach for migrating an IT solution to the cloud. Part of the assignment was therefore to map out cloud computing, its components, and its advantages and disadvantages, which has given us basic knowledge of the subject. To develop a migration approach, we performed several migrations of virtual machines to Windows Azure and, based on these migrations, literature studies and interviews, drew conclusions that resulted in a general approach for migration to the cloud. The results have shown that it is difficult to produce a general yet detailed description of a migration approach, since the scenario differs depending on what is to be migrated and which type of cloud service is used. However, based on the experience from our migrations, together with literature studies, document studies and interviews, we have raised our knowledge to a general level. From this knowledge we have compiled a general approach with greater focus on the preparatory activities that an organization should carry out before migration. Our studies have also resulted in an in-depth description of cloud computing. In our study we have not seen anyone previously describe critical success factors in connection with cloud computing. In our empirical work, however, we have identified three critical success factors for cloud computing, thereby filling part of that knowledge gap.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Distributed Virtual Environments (DVEs) allow multiple users to interact with a virtual world by sharing and manipulating objects. Each user has his or her own view of this world, and every change of state of the environment is distributed to the other users. Due to technological constraints, most systems for distributed virtual environments support only a limited number of users. The major limitations on the scalability of DVEs are imposed by the communication system: as the number of users grows, so does the amount of bandwidth required to exchange information in real time among stations in order to update the environment and keep it in a consistent state for all connected users. Based on this situation, a framework is proposed that uses anycast at the application layer to provide load balancing among the servers of a region and to reduce the latency of message delivery between participants, by selecting the DVE server best suited to the region of the participant connecting to the environment.
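As a rough illustration of the application-layer anycast idea, the sketch below probes a set of candidate DVE servers and picks the best one for the joining participant. The hostnames, the status protocol and the ranking metric (RTT plus reported load) are assumptions made for the example, not details from the paper.

# Application-layer anycast server selection sketch (all endpoints illustrative).
import json, socket, time

SERVERS = [("dve-east.example.org", 7000), ("dve-west.example.org", 7000)]

def probe(host, port, timeout=0.5):
    """Return (rtt_seconds, load) for one server, or None if unreachable."""
    try:
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"STATUS\n")
            reply = sock.recv(1024)              # assumed reply, e.g. b'{"load": 0.42}'
        rtt = time.monotonic() - start
        return rtt, json.loads(reply)["load"]
    except OSError:
        return None

def select_server(servers=SERVERS):
    """Pick the reachable server with the best combined RTT/load score."""
    candidates = []
    for host, port in servers:
        result = probe(host, port)
        if result is not None:
            rtt, load = result
            candidates.append((rtt + load, host, port))  # simple combined score
    if not candidates:
        raise RuntimeError("no DVE server reachable")
    _, host, port = min(candidates)
    return host, port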
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
The technology of partial virtualization is a revolutionary approach to the world of virtualization. It lies directly between full system virtual machines (like QEMU or Xen) and application-level virtual machines (like the JVM or the CLR). The ViewOS project is the flagship of this technique, developed by the Virtual Square laboratory and created to provide an abstract view of the underlying system resources on a per-process basis, working against the principle of the Global View Assumption. Virtual Square provides several different methods to achieve partial virtualization within the ViewOS system, both at user and kernel level. Each of these approaches has its own advantages and shortcomings. This paper provides an analysis of the different virtualization methods and of problems related to both generic and partial virtualization. It is the result of an in-depth study and of research into a new technology for providing partial virtualization based on ELF dynamic binaries. It starts with a brief analysis of currently available virtualization alternatives and then goes on to describe the ViewOS system, highlighting its current shortcomings. The vloader project is then proposed as a possible solution to some of these inconveniences, with a working proof of concept and examples that outline the potential of this new virtualization technique. By injecting specific code and libraries in the middle of the binary loading mechanism provided by the ELF standard, the vloader project enables a streamlined and simplified approach to tracing system calls. With the advantages outlined in the paper, this method offers better performance and portability compared to the currently available ViewOS implementations. Furthermore, some of its disadvantages are also discussed, along with their possible solutions.
Abstract:
During the last few decades, unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space that must be explored to find the best design has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application by hiding the complexity of the underlying hardware, and 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis is focused on virtualization techniques with the goal of mitigating, and where possible overcoming, some of the challenges introduced by the many-core design paradigm.
Abstract:
In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community; in particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously exposed formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose the query constraints; fully customizable and highly structured query results having a 4-dimensional geometry; and some constructions taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities allowing the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
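The thesis defines its own query language rather than SPARQL, but the kind of content-aware retrieval over RDF metadata it describes can be sketched with rdflib and SPARQL; the vocabulary below (ex:Theorem, ex:uses) is invented for illustration and does not reflect HELM's actual metadata schema.

# Illustrative sketch only: graph-oriented retrieval over RDF metadata.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/helm#> .
ex:doc1 a ex:Theorem ; ex:title "Fundamental theorem of arithmetic" ;
        ex:uses ex:PrimeNumber .
ex:doc2 a ex:Definition ; ex:title "Prime number" .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Find all theorems whose statement uses the concept of prime numbers.
results = g.query("""
    PREFIX ex: <http://example.org/helm#>
    SELECT ?title WHERE {
        ?doc a ex:Theorem ;
             ex:title ?title ;
             ex:uses ex:PrimeNumber .
    }
""")
for (title,) in results:
    print(title)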
Abstract:
After decades of development in programming languages and programming environments, Smalltalk is still one of the few environments that provide advanced features, and it is still widely used in industry. However, as Java became prevalent, the ability to call Java code from Smalltalk and vice versa has become important. Traditional approaches to integrating the Java and Smalltalk languages rely on low-level communication between separate Java and Smalltalk virtual machines. We are not aware of any attempt to execute and integrate the Java language directly in the Smalltalk environment. A direct integration allows for very tight and almost seamless integration of the languages and their objects within a single environment. Yet integration and language interoperability impose challenging issues related to method naming conventions, method overloading, exception handling and thread-locking mechanisms. In this paper we describe ways to overcome these challenges and to integrate Java into the Smalltalk environment. Using the techniques described in this paper, the programmer can call Java code from Smalltalk using standard Smalltalk idioms while the semantics of each language remain preserved. We present STX:LIBJAVA, an implementation of the Java virtual machine within Smalltalk/X, as a validation of our approach.
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), and thus enabling several operating systems to run on the same physical platform simultaneously, it can effectively reduce power consumption and building size or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM will degrade its performance dramatically, whereas over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. As a result, effective memory resource management calls for a dynamic memory balancer which, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand, and thus achieve the best memory utilization and the optimal overall performance. In order to estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme, which includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing and intermittent memory tracking. Our evaluation results show that, for the whole SPEC CPU 2006 benchmark suite, after applying the three optimizing techniques, the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and take different strategies for different prediction results. When there is sufficient physical memory on the host, the system balances memory locally among its VMs. Once local memory is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the hot host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
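For orientation, here is a minimal sketch of LRU miss-ratio-curve construction via reuse distances (Mattson's stack algorithm). It only illustrates what is being tracked; the naive linear stack search below is exactly the kind of overhead that the dissertation's AVL-based organization, dynamic hot set sizing and intermittent tracking are designed to reduce. The trace and page counts are made up.

# Naive LRU reuse-distance tracking and miss ratio curve (illustrative only).
from collections import Counter

def miss_ratio_curve(accesses, max_pages):
    """Return the miss ratio for every cache size 1..max_pages (in pages)."""
    stack = []                       # pages in recency order, most recent first
    distances = Counter()            # histogram of LRU reuse distances
    for page in accesses:
        if page in stack:
            depth = stack.index(page)            # reuse distance of this access
            distances[depth] += 1
            stack.pop(depth)
        stack.insert(0, page)                    # page becomes most recent
    total = len(accesses)
    curve, hits = [], 0
    for size in range(1, max_pages + 1):
        hits += distances[size - 1]              # accesses with distance < size hit
        curve.append(1.0 - hits / total)
    return curve

# Example: working set of about 3-4 pages; the miss ratio drops sharply once
# the allocation covers the working set.
trace = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3] * 100
print([round(m, 2) for m in miss_ratio_curve(trace, 5)])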
Abstract:
Virtual machines emulating hardware devices are generally implemented in low-level languages and in a low-level style for performance reasons. This trend results in systems that are largely difficult to understand, difficult to extend and hard to maintain. As new general techniques for virtual machines arise, it gets harder to incorporate or test these techniques because of early design and optimization decisions. In this paper we show how such decisions can be postponed to later phases by separating virtual machine implementation issues from the high-level machine-specific model. We construct compact models of whole-system VMs in a high-level language, which exclude all low-level implementation details. We use the pluggable translation toolchain PyPy to translate these models to executables. During the translation process, the toolchain reintroduces the VM implementation and optimization details for specific target platforms. As a case study we implement an executable model of a hardware gaming device. We show that our approach to VM building increases understandability, maintainability and extendability while preserving performance.
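As a rough illustration of this modeling style (not code from the paper), the toy below captures the machine-specific behavior of a tiny accumulator CPU as a compact high-level model with no low-level implementation details; in the approach described above, a toolchain such as PyPy would reintroduce those details when translating a model written in its RPython subset.

# High-level, executable model of a tiny accumulator machine (illustrative).
class CPU:
    """Three opcodes: LOAD (0x01), ADD (0x02), STORE (0x03)."""
    def __init__(self, memory):
        self.memory = list(memory)   # flat word-addressable memory
        self.acc = 0                 # accumulator register
        self.pc = 0                  # program counter

    def step(self):
        opcode, operand = self.memory[self.pc], self.memory[self.pc + 1]
        self.pc += 2
        if opcode == 0x01:           # LOAD addr  -> acc = memory[addr]
            self.acc = self.memory[operand]
        elif opcode == 0x02:         # ADD addr   -> acc += memory[addr]
            self.acc += self.memory[operand]
        elif opcode == 0x03:         # STORE addr -> memory[addr] = acc
            self.memory[operand] = self.acc
        else:
            raise ValueError("unknown opcode 0x%02x" % opcode)

    def run(self, steps):
        for _ in range(steps):
            self.step()

# Program: memory[16] = memory[14] + memory[15]
program = [0x01, 14, 0x02, 15, 0x03, 16] + [0] * 8 + [2, 3, 0]
cpu = CPU(program)
cpu.run(3)
assert cpu.memory[16] == 5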
Abstract:
As object-oriented languages are extended with novel modularization mechanisms, better underlying models are required to implement these high-level features. This paper describes CELL, a language model that builds on delegation-based chains of object fragments. Composition of groups of cells is used: 1) to represent objects, 2) to realize various forms of method lookup, and 3) to keep track of method references. A running prototype of CELL is provided and used to realize the basic kernel of a Smalltalk system. The paper shows, using several examples, how higher-level features such as traits can be supported by the lower-level model.
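CELL itself is realized in the Smalltalk world, but the core idea of delegation-based chains of object fragments can be sketched in a few lines of Python; the names and the "trait"-like shared fragment below are invented for illustration and do not reflect CELL's actual semantics.

# Delegation-based chain of object fragments with chain-walking lookup (illustrative).
class Cell:
    """One fragment in the chain: a slot table plus an optional delegate."""
    def __init__(self, slots, delegate=None):
        self.slots = slots           # selector -> function(receiver, *args)
        self.delegate = delegate     # next cell in the delegation chain

    def lookup(self, selector):
        cell = self
        while cell is not None:      # walk the chain until a slot is found
            if selector in cell.slots:
                return cell.slots[selector]
            cell = cell.delegate
        raise AttributeError("message not understood: " + selector)

class CellObject:
    """An object is just a reference to the head of its fragment chain."""
    def __init__(self, head):
        self.head = head

    def send(self, selector, *args):
        return self.head.lookup(selector)(self, *args)

# A trait-like fragment shared across objects, composed with an object-specific one.
printing = Cell({"describe": lambda self: "point " + str(self.send("coords"))})
point = Cell({"coords": lambda self: (1, 2)}, delegate=printing)
p = CellObject(point)
print(p.send("describe"))            # -> point (1, 2)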
Abstract:
Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.