812 results for Virtualization Techniques


Relevance:

100.00%

Publisher:

Abstract:

This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need for software installation and configuration on each computer. Several mechanisms were created: a mapping of the resources used by the application, to improve software distribution and startup; a virtualization middleware, which provides all the resources needed for software execution; an asynchronous P2P transport, used to optimize distribution over the network; and off-line support, so that the user can execute the application even when the server is unavailable or the computer is off the network. © Springer-Verlag Berlin Heidelberg 2010.
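As a hedged illustration of the off-line support described above (all names, paths and URLs are hypothetical, not the paper's), a launcher might prefer the deployment server but fall back to a local cache when the server is unreachable:

```python
import os
import shutil
import urllib.request

CACHE_DIR = os.path.expanduser("~/.appcache")   # hypothetical local cache
SERVER_URL = "http://deploy.example.org/apps"   # hypothetical deployment server

def fetch_app(name: str) -> str:
    """Return a local path to the application image, preferring the server
    but falling back to the cache so execution still works offline."""
    cached = os.path.join(CACHE_DIR, name)
    try:
        with urllib.request.urlopen(f"{SERVER_URL}/{name}", timeout=5) as resp:
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(cached, "wb") as out:
                shutil.copyfileobj(resp, out)   # refresh the cache copy
    except OSError:                             # covers URLError and local I/O errors
        if not os.path.exists(cached):
            raise RuntimeError(f"{name}: server unreachable and not cached")
    return cached
```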

Relevance:

70.00%

Publisher:

Abstract:

During the last few decades an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques, with the goal of mitigating, and where possible overcoming, some of the challenges introduced by the many-core design paradigm.

Relevance:

70.00%

Publisher:

Abstract:

Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase in engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems, based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open-source hypervisor designed as a generic virtualization layer for heterogeneous multicores. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance, and automotive domains. The impact of MultiPARTES on the targeted domains is also discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY), mixed-criticality integration is considered for multicore processors. Key challenges are the combination of software virtualization and hardware segregation, and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight) along with development and certification methodology.
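XtratuM's actual configuration is XML-based and not shown here; the following hypothetical Python sketch only illustrates the temporal-separation idea behind partitioning, where each partition owns a fixed slot in a static cyclic plan, so a low-criticality partition can never steal time from a critical one:

```python
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    criticality: str   # e.g. high vs. low criticality subsystem
    slot_ms: int       # fixed time slot: temporal separation

# Illustrative static cyclic plan (names and slot lengths invented).
PLAN = [Partition("flight_ctrl", "high", 5),
        Partition("logging", "low", 2),
        Partition("telemetry", "medium", 3)]

MAJOR_FRAME_MS = sum(p.slot_ms for p in PLAN)   # the plan repeats every frame

def partition_at(t_ms: int) -> Partition:
    """Which partition owns the CPU at time t_ms within the cyclic schedule."""
    t = t_ms % MAJOR_FRAME_MS
    for p in PLAN:
        if t < p.slot_ms:
            return p
        t -= p.slot_ms

assert partition_at(6).name == "logging"   # 6 ms falls in the second slot
```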

Relevance:

70.00%

Publisher:

Abstract:

Virtualization techniques have received increased attention in the field of embedded real-time systems. Such techniques provide a set of virtual machines that run on a single hardware platform, thus allowing several application programs to be executed as though they were running on separate machines, each with an isolated memory space and a fraction of the real processor time available to it. This paper deals with some problems that arise when implementing real-time systems written in Ada on a virtual machine. The effects of virtualization on the performance of the Ada real-time services are analysed, and requirements for the virtualization layer are derived. Virtual-machine time services are also defined in order to properly support Ada real-time applications. The implementation of the ORK+ kernel on the XtratuM supervisor is used as an example.
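As a language-neutral sketch (written in Python rather than Ada, with a hypothetical task rate), the following shows one reason time services matter on a virtualized platform: releasing a periodic task against an absolute deadline, as Ada's delay until does, prevents hypervisor-induced delay in one release from shifting all later releases:

```python
import time

PERIOD_NS = 10_000_000   # hypothetical 10 ms task period

def periodic_task(body, iterations: int) -> None:
    """Release `body` every PERIOD_NS using an absolute next-release time,
    so jitter in one activation does not accumulate across the schedule."""
    next_release = time.monotonic_ns()
    for _ in range(iterations):
        body()
        next_release += PERIOD_NS                      # absolute, not relative
        remaining = next_release - time.monotonic_ns()
        if remaining > 0:                              # skip sleep if overrun
            time.sleep(remaining / 1e9)
```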

Relevance:

60.00%

Publisher:

Abstract:

To simplify computer management, various administration systems based on wired connections adopt advanced techniques to manage software configuration. Nevertheless, the strong coupling between hardware and software forces each machine to be managed individually, besides penalizing computational mobility and ubiquity. All these issues degrade scalability, flexibility and the ease of installing and maintaining distributed applications. This article presents an environment for centralized wireless network management, named WSE-OS (Wireless Sharing Environment - Operating Systems): a model based on Virtual Desktop Infrastructure (VDI) which combines virtualization techniques and secure remote access systems to create a distributed architecture as the basis for a management system. WSE-OS can replicate operating system images over a wireless network and offers hardware abstraction to its clients, making management more flexible and independent of wired connections. Results obtained from this work indicate that WSE-OS allows operating system images to be executed on client computers from a single software configuration. WSE-OS can also be used as a management tool for operating systems in a wireless network.

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

60.00%

Publisher:

Abstract:

Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition and integration. Specific application requirements are difficult to meet with the current static, predefined cloud integration architectures and models. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on the specific application execution requirements, leveraging emerging virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.
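The paper's framework itself is not reproduced here; as a toy sketch of the resource-composition step only (all offers, services and prices invented), one could match each service required by an application against the cheapest multi-provider offer that satisfies it:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    service: str     # e.g. "vm", "storage", "network"
    capacity: int
    price: float

def compose_ensemble(offers, demand):
    """Greedy illustration of multi-provider composition: for each required
    service, pick the cheapest offer that satisfies the requested capacity."""
    ensemble = {}
    for service, needed in demand.items():
        candidates = [o for o in offers
                      if o.service == service and o.capacity >= needed]
        if not candidates:
            raise ValueError(f"no provider can satisfy {service}={needed}")
        ensemble[service] = min(candidates, key=lambda o: o.price)
    return ensemble

offers = [Offer("A", "vm", 8, 0.40), Offer("B", "vm", 16, 0.70),
          Offer("A", "storage", 500, 0.05), Offer("C", "storage", 200, 0.02)]
print(compose_ensemble(offers, {"vm": 8, "storage": 300}))
```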

Relevance:

60.00%

Publisher:

Abstract:

Continuous training and preparation of IT staff is one of the most effective strategies for improving the quality, stability and security of networks and their associated services. Along these lines, CEDIA has been running training courses and workshops for its members, and within CSIRT-CEDIA the possibility has been considered of optimizing the processes associated with deploying the infrastructure needed to provide the participants of these trainings with suitably customized material in the area of information security. It was thus decided to use virtualization techniques to take advantage of the available resources; but even though this in itself is not a new trend, using a complete copy of the virtual disk for each participant is impractical not only in terms of time, but also in terms of the storage it consumes. This work is aimed precisely at optimizing the time and resource consumption associated with replicating a single virtual machine and disk for the individual use of several participants.
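The abstract does not name the specific optimization used, but a standard way to avoid a full disk copy per participant is a per-clone copy-on-write overlay on a shared read-only base image, for example with QEMU's qcow2 backing files (paths hypothetical):

```python
import subprocess

BASE = "/srv/images/lab-base.qcow2"   # hypothetical read-only base image

def make_clone(participant: str) -> str:
    """Create a copy-on-write overlay: the clone starts near-empty and only
    stores blocks the participant actually modifies, so N clones cost far
    less time and storage than N full copies of the base disk."""
    clone = f"/srv/images/{participant}.qcow2"
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-b", BASE, "-F", "qcow2", clone],
        check=True)
    return clone

for p in ["alice", "bob", "carol"]:
    make_clone(p)
```

With this scheme, replication time and storage grow with each participant's modifications rather than with the full image size.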

Relevance:

30.00%

Publisher:

Abstract:

The idea of virtualization is to represent IT hardware resources as pools. When resources are needed to perform a task, they are gathered separately from each pool. One area of virtualization is server virtualization, which aims to utilize server hardware as efficiently as possible. This efficiency is achieved by using separate instances called virtual machines. This master's thesis presents and compares different server virtualization models and techniques that can be used with the IA-32 architecture. The difference between virtualization and various partitioning techniques is examined separately. In addition, the changes that server virtualization brings to the infrastructure, the environment and the hardware are discussed at a general level. The validity of the theory was demonstrated by running several tests with two different virtualization software products. Based on the tests, server virtualization reduces performance and creates an environment that is harder to manage than a traditional one. Security must also be viewed from a new perspective, since physical isolation cannot be provided for virtual machines. To gain the greatest possible benefit from virtualization in a production environment, careful consideration and planning are required. The best use cases are various test environments, where the requirements for performance and security are not as strict.

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with optimization techniques and the modeling of vehicular networks. Using models based on integer linear programming (ILP) as well as heuristic ones, it was possible to study the performance of 5G networks for vehicular applications. Thanks to the Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) paradigms, it was possible to study the performance of different classes of service, such as the Ultra Reliable Low Latency Communications (URLLC) class and the enhanced Mobile BroadBand (eMBB) class, and how the functional split can have positive effects on network resource management. Two different protection techniques have been studied: Shared Path Protection (SPP) and Dedicated Path Protection (DPP). With these different protections, different network reliability requirements can be met, according to the needs of the end user. Moreover, thanks to a simulator developed in Python, it was possible to study the dynamic allocation of resources in a 5G metro network. Through different provisioning algorithms and different dynamic resource management techniques, useful results have been obtained for understanding the needs of the vehicular networks that will exploit 5G. Finally, two models are shown for reconfiguring backup resources when shared-resource protection is used.
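As a small illustration of Dedicated Path Protection (the topology is invented; SPP, which would additionally share backup capacity between demands whose working paths are disjoint, is omitted for brevity), one can reserve a working path plus a link-disjoint backup so traffic survives any single link failure on the working path:

```python
import networkx as nx

# Toy topology; edge weights stand in for link cost.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("A", "D", 1),
                           ("D", "C", 1), ("B", "D", 1)])

def dedicated_path_protection(g, src, dst):
    """DPP sketch: compute a working path, then a backup path in a copy of
    the graph with the working path's links removed (link-disjointness)."""
    working = nx.shortest_path(g, src, dst, weight="weight")
    h = g.copy()
    h.remove_edges_from(zip(working, working[1:]))   # forbid reuse of links
    backup = nx.shortest_path(h, src, dst, weight="weight")
    return working, backup

print(dedicated_path_protection(G, "A", "C"))
```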

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is the study and creation of a harness modelling system. The model needs to simulate faithfully the physical behaviour of the harness, without instability or incorrect movements. Since there are various simulation engines that try to model wiring systems, this thesis focused on the creation and testing of a 3D environment with wiring and other objects using the PyChrono simulation engine. Fine-tuning of the simulation parameters was done during the tests to achieve the most stable and correct simulation possible, but the tests showed the engine's intrinsic limits in detecting collisions between the various parts of the cables, while collisions between cables and other physical objects such as the pavement, walls and others are handled well by the simulator. Finally, since the main purpose of the model is to be used to train Artificial Intelligence through Reinforcement Learning techniques, we designed, using the OpenAI Gym APIs, the general structure of the learning environment, defining its basic functions and an initial framework.
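A minimal skeleton of such a Gym environment (the class name and the observation and action shapes are hypothetical, and the PyChrono calls are stubbed out as comments):

```python
import gym
import numpy as np
from gym import spaces

class HarnessEnv(gym.Env):
    """Skeleton of the learning environment: observations describe cable
    state, actions move a grasp point, reward scores the configuration."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(12,),
                                            dtype=np.float32)

    def reset(self):
        # Would re-initialize the PyChrono simulation here.
        return np.zeros(12, dtype=np.float32)

    def step(self, action):
        # Would advance the PyChrono simulation by one control step here.
        obs = np.zeros(12, dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info
```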

Relevance:

20.00%

Publisher:

Abstract:

The aim of this investigation was to compare the skeletal stability of three different rigid fixation methods after mandibular advancement. Fifty-five class II malocclusion patients treated with bilateral sagittal split ramus osteotomy and mandibular advancement were selected for this retrospective study. Group 1 (n = 17) had miniplates with monocortical screws, Group 2 (n = 16) had bicortical screws, and Group 3 (n = 22) had the osteotomy fixed by means of the hybrid technique. Cephalograms were taken preoperatively, 1 week postoperatively, and 6 months after the orthognathic surgery. Linear and angular changes of the cephalometric landmarks of the chin region were measured at each period, and the changes at each cephalometric landmark were determined for the intervals between them. Postoperative changes in mandibular shape were analyzed to determine the stability of the fixation methods. There was minimal difference in relapse of the mandibular advancement among the three groups. Statistical analysis showed no significant difference in postoperative stability. However, a positive correlation between the amount of advancement and the amount of postoperative relapse was demonstrated by linear multiple regression (p < 0.05). It can be concluded that all three techniques can be used to obtain stable postoperative results in mandibular advancement after 6 months.

Relevance:

20.00%

Publisher:

Abstract:

Quantification of rural workers' dermal exposure to pesticides, used in risk assessment, can be performed with different techniques such as patches or whole-body evaluation. However, the wide variety of methods can jeopardize the process by producing disparate results, depending on the principles behind sample collection. A critical review was thus performed on the main techniques for quantifying dermal exposure, calling attention to this issue and to the need to establish a single methodology for quantifying dermal exposure in rural workers. Such harmonization of the different techniques should help achieve safer and healthier working conditions. Techniques that can provide reliable exposure data are an essential first step towards avoiding harm to workers' health.