977 results for Distributed virtual machines


Relevance:

80.00%

Publisher:

Abstract:

To begin with, an analysis was made of the different options that could be implemented to achieve the objective of this work: to have machines whose operating system is independent of the hardware installed in them. To that end, it was decided to install a base operating system on every computer in the laboratory, covering only the minimum requirements, namely a graphical interface and a network connection. Resource consumption by this minimal system must be kept as low as possible so that the machines feel as responsive as possible to users. The chosen system was Linux, in its Ubuntu distribution [ubu, http], with the minimum set of modules needed to run the required software. Once the host operating system is installed, the Xfce desktop [ubu2, http] is installed; it is the lightest Ubuntu desktop and still provides good performance. Next, virtualization software was installed on each computer; Oracle VirtualBox [vir2, http] was chosen for the good performance it offers. On top of this software, as many virtual machines (with a Windows operating system) are created as there are different subjects taught in the laboratory in question. As a result, when the program starts, students can choose which machine to boot and, more importantly, any change can be made to the hardware (except the hard drive, since that would erase everything stored on it) without having to reinstall the operating system, achieving the abstraction of software from hardware. It was also decided to use a NAS server to keep a backup of the virtual machines created in VirtualBox, partly to take advantage of an already existing infrastructure. The NAS server makes it possible to recover any file (virtual machine image) whenever a virtual machine becomes corrupt on one or several computers, and this type of server has the major advantage of being multicast, that is, it allows simultaneous requests.
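A minimal sketch of how such a course-VM launcher might drive VirtualBox from the minimal Ubuntu/Xfce host is shown below; it assumes only that VBoxManage is on the PATH and that one VM per course is already registered. The VM names and the menu logic are illustrative, not the setup actually deployed in the laboratory.

```python
# Hypothetical sketch of a lab launcher driving VirtualBox from the Xfce host.
# Assumes VBoxManage is on the PATH and one VM per course is already registered.
import subprocess

def list_vms():
    """Return the names of all VMs registered with VirtualBox."""
    out = subprocess.run(["VBoxManage", "list", "vms"],
                         capture_output=True, text=True, check=True).stdout
    # Each output line looks like: "CourseName" {uuid}
    return [line.split('"')[1] for line in out.splitlines() if '"' in line]

def start_vm(name):
    """Boot the chosen course VM with its own GUI window."""
    subprocess.run(["VBoxManage", "startvm", name, "--type", "gui"], check=True)

if __name__ == "__main__":
    vms = list_vms()
    for i, name in enumerate(vms):
        print(f"[{i}] {name}")
    choice = int(input("Which machine do you want to start? "))
    start_vm(vms[choice])
```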

Relevance:

80.00%

Publisher:

Abstract:

Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code through edge-weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
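To make the edge-weight idea concrete, here is an illustrative sketch (not UQDBT's actual implementation) of how instrumented generic code might count control-flow edges and promote a block to native translation once an edge crosses a hot threshold; the threshold value and block addresses are assumed for the example.

```python
# Illustrative sketch (not UQDBT code) of edge-weight instrumentation:
# count how often each control-flow edge is taken in the generic translated code
# and promote a block to native translation once an incoming edge becomes "hot".
from collections import defaultdict

HOT_THRESHOLD = 50          # assumed value; real translators tune this
edge_counts = defaultdict(int)
native_blocks = set()       # blocks already translated to native code

def record_edge(src_block, dst_block):
    """Called by the instrumented generic code each time an edge is taken."""
    edge_counts[(src_block, dst_block)] += 1
    if (edge_counts[(src_block, dst_block)] >= HOT_THRESHOLD
            and dst_block not in native_blocks):
        translate_to_native(dst_block)

def translate_to_native(block):
    """Placeholder for the forward-engineering phase that emits target code."""
    native_blocks.add(block)
    print(f"block {block:#x} promoted to native translation")

# Example: a loop back-edge becomes hot after enough iterations.
for _ in range(60):
    record_edge(0x401000, 0x401020)
```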

Relevance:

80.00%

Publisher:

Abstract:

This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
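The following is a simplified sketch of the incremental regrouping idea under stated assumptions: each virtual machine holds a partition of the index, treated here as an opaque list of object ids, and the goal is to move as few objects as possible when the VM count changes rather than rebuilding from scratch. The real algorithm operates on metric-space index structures; this only illustrates the bookkeeping.

```python
# Simplified sketch of regrouping an index across a changing number of VMs.
# Each "partition" is an opaque list of object ids; only the objects of removed
# VMs (scale-in) or a small share per VM (scale-out) are moved.
def regroup(partitions, new_vm_count):
    """partitions: list (one entry per current VM) of lists of object ids."""
    old_vm_count = len(partitions)
    if new_vm_count < old_vm_count:
        # Scale in: spread the contents of the removed VMs over the survivors.
        survivors = partitions[:new_vm_count]
        for i, part in enumerate(partitions[new_vm_count:]):
            for j, obj in enumerate(part):
                survivors[(i + j) % new_vm_count].append(obj)
        return survivors
    # Scale out: new VMs start empty and take a share from each existing VM.
    new_parts = [[] for _ in range(new_vm_count - old_vm_count)]
    share = 1.0 - old_vm_count / new_vm_count   # fraction each old VM gives away
    for part in partitions:
        take = int(len(part) * share)
        moved, part[:] = part[:take], part[take:]
        for k, obj in enumerate(moved):
            new_parts[k % len(new_parts)].append(obj)
    return partitions + new_parts

# Example: shrink a 4-VM index to 3 VMs; only the removed VM's objects move.
index = [[f"obj{v}_{i}" for i in range(6)] for v in range(4)]
print([len(p) for p in regroup(index, 3)])   # -> [8, 8, 8]
```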

Relevance:

80.00%

Publisher:

Abstract:

Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers become more energy efficient with various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of a VM are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. Finally, since DCNs are typically provisioned with full bisection bandwidth while DCN traffic exhibits fluctuating patterns, we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem into a routing problem, and employs depth-first and best-fit search to find efficient paths for flows.
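A simplified reading of the M3SBP placement step is sketched below (it is not the dissertation's code): each stochastic demand, given as a mean and standard deviation, is replaced by an equivalent deterministic value, and the VM is placed on the feasible server whose minimum per-dimension utilization ratio after placement is largest. The safety multiplier and the capacities are assumed values.

```python
# Simplified sketch of the M3SBP placement idea: replace each stochastic demand
# (mean, std) by an equivalent deterministic value, then place the VM on the
# feasible server with the largest post-placement minimum utilization ratio.
K = 2.0  # assumed "safety" multiplier for the stochastic part

def equivalent_demand(vm):
    # vm: {"cpu": (mean, std), "mem": (mean, std)}
    return {dim: mean + K * std for dim, (mean, std) in vm.items()}

def place(vm, servers):
    """servers: list of {"cap": {...}, "used": {...}}; returns chosen server index."""
    demand, best, best_score = equivalent_demand(vm), None, -1.0
    for i, s in enumerate(servers):
        new_used = {d: s["used"][d] + demand[d] for d in demand}
        if any(new_used[d] > s["cap"][d] for d in demand):
            continue  # VM does not fit on this server
        score = min(new_used[d] / s["cap"][d] for d in demand)  # min utilization ratio
        if score > best_score:
            best, best_score = i, score
    if best is None:
        raise RuntimeError("no server can host this VM")
    for d in demand:
        servers[best]["used"][d] += demand[d]
    return best

servers = [{"cap": {"cpu": 16, "mem": 64}, "used": {"cpu": 0, "mem": 0}} for _ in range(2)]
print(place({"cpu": (2.0, 0.5), "mem": (8.0, 1.0)}, servers))
```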

Relevance:

80.00%

Publisher:

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; administrators, in turn, will be able to maximize their total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified the resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated the modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
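As an illustration of the modeling step, the sketch below fits a Support Vector Machine regressor (scikit-learn's SVR) to predict application response time from two assumed resource control parameters, CPU cap and memory allocation, and then uses the model for a toy VM-sizing decision against an SLA. The data, parameter names and SLA threshold are synthetic; the thesis trains on measurements from real virtualized applications.

```python
# Illustrative sketch: learn application performance as a function of resource
# control parameters (here: CPU cap and memory allocation) with an SVM regressor,
# then size a VM by picking the cheapest configuration that meets the SLA.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform([0.5, 1.0], [4.0, 8.0], size=(200, 2))                    # [cpu_cap, mem_gb]
y = 1000 / (X[:, 0] * np.minimum(X[:, 1], 4.0)) + rng.normal(0, 5, 200)   # response time (ms)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)

# "VM sizing": cheapest configuration whose predicted response time meets the SLA.
candidates = [(cpu, mem) for cpu in (1, 2, 4) for mem in (2, 4, 8)]
sla_ms, cost = 200.0, lambda cpu, mem: cpu + 0.25 * mem
feasible = [c for c in candidates if model.predict([c])[0] <= sla_ms]
print(min(feasible, key=lambda c: cost(*c)) if feasible else "no feasible size")
```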

Relevance:

80.00%

Publisher:

Abstract:

Cloud Computing is a paradigm that enables access, in a simple and pervasive way, through the network, to shared and configurable computing resources. Such resources can be offered on demand to users in a pay-per-use model. As this paradigm advances, a single service offered by a cloud platform may not be enough to meet all of a client's requirements, so services provided by different cloud platforms must be composed. However, current cloud platforms are not implemented using common standards; each one has its own APIs and development tools, which is a barrier to composing services from different platforms. In this context, Cloud Integrator, a service-oriented middleware platform, provides an environment that facilitates the development and execution of multi-cloud applications, which are compositions of services from different cloud platforms represented by abstract workflows. However, Cloud Integrator has some limitations: (i) applications are executed locally; (ii) users cannot specify an application in terms of its inputs and outputs; and (iii) experienced users cannot directly determine the concrete Web services that will perform the workflow. To address these limitations, this work proposes Cloud Stratus, a middleware platform that extends Cloud Integrator and offers different ways to specify an application: as an abstract workflow or as a complete or partial execution flow. The platform enables application deployment on cloud virtual machines, so that several users can access it through the Internet. It also supports the access and management of virtual machines on different cloud platforms and provides mechanisms for service monitoring and the assessment of QoS parameters. Cloud Stratus was validated through a case study consisting of an application that uses services provided by different cloud platforms, and it was further evaluated through computational experiments that analyze the performance of its processes.
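As a rough illustration of the abstract-workflow idea (this is not the Cloud Stratus API), the sketch below represents an application by its inputs, outputs and required capabilities, and binds each activity to a concrete service drawn from a catalogue that may span several cloud platforms; all names and services are hypothetical.

```python
# Hypothetical sketch (not the Cloud Stratus API) of an abstract workflow whose
# activities are later bound to concrete services from different cloud providers.
abstract_workflow = {
    "inputs": ["raw_dataset_url"],
    "outputs": ["report_url"],
    "activities": [
        {"name": "store",   "requires": "object-storage"},
        {"name": "process", "requires": "batch-compute"},
        {"name": "publish", "requires": "web-hosting"},
    ],
}

# Catalogue of concrete services, possibly spread over several cloud platforms.
catalogue = {
    "object-storage": {"service": "provider_a.blob_store", "latency_ms": 40},
    "batch-compute":  {"service": "provider_b.batch",      "latency_ms": 120},
    "web-hosting":    {"service": "provider_a.app_host",   "latency_ms": 25},
}

def bind(workflow, catalogue):
    """Turn the abstract workflow into a concrete execution flow."""
    plan = []
    for activity in workflow["activities"]:
        offer = catalogue.get(activity["requires"])
        if offer is None:
            raise LookupError(f"no service offers capability {activity['requires']!r}")
        plan.append((activity["name"], offer["service"]))
    return plan

for step, service in bind(abstract_workflow, catalogue):
    print(f"{step} -> {service}")
```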

Relevance:

80.00%

Publisher:

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses the question by proposing a six-step benchmarking methodology in which a user provides a set of weights that indicate how important memory, local communication, computation and storage related operations are to an application. The user can provide either a set of four abstract weights or eight fine-grained weights, based on their knowledge of the application. The weights, together with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and another that takes both performance and cost into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved with the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved with at least one of the top three VMs ranked by the methodology.
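A minimal sketch of the ranking step is given below, assuming four abstract weights and already-normalised benchmark scores: a weighted score yields the performance-only ranking, and dividing by price yields the performance-and-cost ranking. All VM types, scores and prices are invented for illustration.

```python
# Minimal sketch of the ranking step: combine user-supplied weights (how important
# memory, local communication, computation and storage are to the application) with
# normalised benchmark scores, then rank VM types by weighted performance and by
# performance per unit cost. All numbers below are made up.
weights = {"memory": 0.4, "local_comm": 0.1, "compute": 0.4, "storage": 0.1}

vms = {  # normalised benchmark scores (higher is better) and hourly price
    "type.small":  {"memory": 0.4, "local_comm": 0.5, "compute": 0.3, "storage": 0.6, "price": 0.05},
    "type.medium": {"memory": 0.7, "local_comm": 0.6, "compute": 0.6, "storage": 0.7, "price": 0.12},
    "type.large":  {"memory": 0.9, "local_comm": 0.8, "compute": 0.9, "storage": 0.8, "price": 0.30},
}

perf = {name: sum(weights[k] * s[k] for k in weights) for name, s in vms.items()}
perf_rank = sorted(perf, key=perf.get, reverse=True)
value_rank = sorted(perf, key=lambda n: perf[n] / vms[n]["price"], reverse=True)

print("performance ranking:", perf_rank)
print("performance-per-cost ranking:", value_rank)
```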

Relevance:

80.00%

Publisher:

Abstract:

This work explores the development of MemTri, a memory forensics triage tool that can assess the likelihood of criminal activity in a memory image based on evidence data artefacts generated by several applications. Fictitious illegal suspect activity scenarios were performed on virtual machines to generate 60 test memory images for input into MemTri. Four categories of applications (i.e., internet browsers, instant messengers, FTP clients and document processors) are examined for data artefacts located through the use of regular expressions. The identified data artefacts are then analysed using a Bayesian network to assess the likelihood that a seized memory image contains evidence of illegal activity. MemTri is currently under development, and this paper introduces only the basic concept as well as the components on which the application is built. A complete description of MemTri, together with extensive experimental results, is expected to be published in the first semester of 2017.
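The sketch below illustrates the two stages described above with invented patterns and probabilities (it is not MemTri's artefact set or Bayesian network): regular expressions locate application artefacts in a memory dump, and the hits are combined with a simple odds update to score the likelihood of illegal activity.

```python
# Illustrative two-stage triage sketch: regex artefact extraction over a memory
# dump, then a naive Bayes-style odds update to score the image. Patterns and
# likelihood ratios are invented for the example.
import re

ARTEFACT_PATTERNS = {                 # hypothetical indicators per application category
    "browser_search": rb"q=buy\+fake\+passport",
    "ftp_upload":     rb"STOR\s+\S+\.zip",
    "im_message":     rb"meet me at the drop",
}
LIKELIHOOD_RATIOS = {"browser_search": 8.0, "ftp_upload": 4.0, "im_message": 6.0}

def triage(memory_image_bytes, prior_odds=0.1):
    odds = prior_odds
    for name, pattern in ARTEFACT_PATTERNS.items():
        if re.search(pattern, memory_image_bytes):
            odds *= LIKELIHOOD_RATIOS[name]     # each found artefact strengthens belief
    return odds / (1 + odds)                    # convert odds back to a probability

dump = b"...GET /search?q=buy+fake+passport HTTP/1.1...STOR plans.zip..."
print(f"likelihood of illegal activity: {triage(dump):.2f}")
```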

Relevance:

80.00%

Publisher:

Abstract:

The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms “merge” similar pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors depending on the guest system, the workload and the execution time.
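A toy sketch of content-based page sharing with copy-on-write is shown below; real hypervisors work on fixed-size pages and combine hashing with full byte comparisons, so this is only meant to illustrate why identical guest pages can collapse onto a single physical copy until one guest writes to its page.

```python
# Simplified sketch of transparent page sharing: pages with identical content are
# mapped to a single physical copy, and a write triggers copy-on-write. This toy
# version works on small byte strings instead of real 4 KiB pages.
import hashlib

class SharedMemory:
    def __init__(self):
        self.physical = {}        # content hash -> page content
        self.page_table = {}      # (vm, virtual_page) -> content hash

    def map_page(self, vm, vpage, content: bytes):
        h = hashlib.sha256(content).hexdigest()
        self.physical.setdefault(h, content)       # merge if content already present
        self.page_table[(vm, vpage)] = h

    def write(self, vm, vpage, new_content: bytes):
        # Copy-on-write: the writing guest gets its own private copy.
        self.map_page(vm, vpage, new_content)

    def physical_pages(self):
        return len(self.physical)

mem = SharedMemory()
mem.map_page("vm1", 0, b"zero page")
mem.map_page("vm2", 0, b"zero page")     # shared: still one physical page
print(mem.physical_pages())              # -> 1
mem.write("vm2", 0, b"modified")         # copy-on-write breaks the sharing
print(mem.physical_pages())              # -> 2
```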


Relevance:

80.00%

Publisher:

Abstract:

This work studies the “crossed” execution of products/applications, in particular the execution of applications on a system for which they were not designed. Specifically, it analyses the execution of native Windows programs in a Linux environment and vice versa. A representative set of different applications was selected for this thesis, ranging from generic products such as Microsoft Office to ERP (Enterprise Resource Planning) solutions and software widely used in academic and scientific environments. For running Windows applications on Linux, essentially two types of tools were used: the Wine translation layer (capable of executing native Windows programs) and virtual machines such as VirtualBox and VMware Player. For the reverse case, running Linux applications on Windows, the main solution was the use of those same virtual machines, to which the Linux distributions Ubuntu 10.04 and OpenSUSE 11.3 were added.

Relevance:

80.00%

Publisher:

Abstract:

A global Italian pharmaceutical company has to provide two work environments that serve different needs. The environments will allow solutions to be developed in a controlled, secure and, at the same time, independent manner on a state-of-the-art enterprise cloud platform. The need for two different environments is dictated by the requirements of the working units. The first environment is designed to facilitate the creation of applications related to genomics and is therefore aimed mostly at data scientists. This environment is capable of consuming, producing, retrieving and incorporating data, and it supports the programming languages most commonly used for genomic applications (e.g., Python, R). The proposal was to obtain a pool of ready-to-go virtual machines with different architectures, so as to provide the best performance for the job that needs to be carried out. The second environment has a more traditional character: it obtains, via an ETL (Extract-Transform-Load) process, a global data model resembling a classical relational structure. It provides the major BI operations (e.g., analytics, performance measures, reports) that can be leveraged both for application analysis and for internal usage. Since both architectures will maintain large amounts of data, covering not only pharmaceutical information but also internal company information, it will be possible to digest the data with reporting/analytics tools and to apply data mining and machine learning technologies to exploit the information it contains. The thesis introduces the proposals, implementations, descriptions of the technologies/platforms used, and future work for the environments discussed above.
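As a minimal illustration of the ETL-to-relational idea behind the second environment, the sketch below extracts records from an assumed upstream feed, transforms them into a flat relational shape and loads them into a SQLite table that a BI-style query can then aggregate; the table and field names are invented and do not reflect the company's data model.

```python
# Minimal ETL sketch: extract records from a source feed, transform them into a
# flat relational shape, and load them into a relational table for BI queries.
# Table and field names are invented for illustration.
import sqlite3

source_feed = [  # "extract": records as they arrive from an upstream system
    {"batch": "B-001", "product": "compound-x", "yield_pct": "97.4", "site": "Milan"},
    {"batch": "B-002", "product": "compound-x", "yield_pct": "95.1", "site": "Parma"},
]

def transform(record):
    # "transform": normalise types and names for the global data model
    return (record["batch"], record["product"], float(record["yield_pct"]), record["site"].upper())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production (batch TEXT PRIMARY KEY, product TEXT, yield_pct REAL, site TEXT)")
conn.executemany("INSERT INTO production VALUES (?, ?, ?, ?)", (transform(r) for r in source_feed))

# A BI-style aggregation over the loaded data
for row in conn.execute("SELECT site, AVG(yield_pct) FROM production GROUP BY site"):
    print(row)
```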

Relevance:

40.00%

Publisher:

Abstract:

Sustainable development concerns have led renewable energy sources to be increasingly used for distributed electricity generation. However, this is mainly due to incentives or mandatory targets determined by energy policies, as in the European Union. Assuring a sustainable future requires distributed generation to be able to participate in competitive electricity markets. To gain negotiation power in the market and take advantage of economies of scale, distributed generators can be aggregated, giving rise to a new concept: the Virtual Power Producer (VPP). VPPs are multi-technology, multi-site heterogeneous entities that should adopt organization and management methodologies so that distributed generation becomes a truly profitable activity, able to participate in the market. This paper presents ViProd, a simulation tool that allows VPP operation to be simulated in the context of MASCEM, a multi-agent based electricity market simulator.

Relevance:

40.00%

Publisher:

Abstract:

The process of resources systems selection plays an important part in the integration of Distributed/Agile/Virtual Enterprises (D/A/V Es). However, resources systems selection is still a difficult problem to solve in a D/A/V E, as this paper points out. Broadly speaking, the selection problem has been formulated from different perspectives, giving rise to different kinds of models/algorithms to solve it. To assist the development of an intelligent and flexible web prototype (broker tool) that integrates all the selection model activities and tools and that can adapt to each D/A/V E project or instance (the major goal of our overall project), this paper presents a formulation of one kind of resources selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected point to the need for other kinds of algorithms (approximate solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, however, knowledge of the algorithms' limitations is very important in order to develop and select, based on the features of the problem, the most suitable algorithm that guarantees good performance.
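To make the combinatorial nature of the problem concrete, the toy sketch below assigns each processing task to exactly one of its pre-selected resources at minimum total cost by exhaustive enumeration, ignoring any capacity or compatibility constraints the full model may include. Enumeration visits on the order of r^t candidate solutions for t tasks with r pre-selected resources each, which is why exact methods such as simplex plus branch-and-bound run into processing-time limits as t and r grow. The costs are invented.

```python
# Toy illustration of the selection problem: assign each processing task to exactly
# one of its pre-selected resources at minimum total cost, by brute-force enumeration.
from itertools import product

# cost[task][resource] for the pre-selected resources of each task (hypothetical data)
costs = [
    {"R1": 4.0, "R2": 6.0, "R3": 5.5},   # task 0
    {"R2": 3.0, "R4": 2.5},              # task 1
    {"R1": 7.0, "R3": 4.0, "R4": 6.5},   # task 2
]

def best_assignment(costs):
    best, best_cost = None, float("inf")
    for combo in product(*(c.keys() for c in costs)):        # r1*r2*...*rt combinations
        total = sum(costs[t][res] for t, res in enumerate(combo))
        if total < best_cost:
            best, best_cost = combo, total
    return best, best_cost

assignment, total = best_assignment(costs)
print(assignment, total)   # -> ('R1', 'R4', 'R3') with cost 10.5
```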