155 results for virtualization


Relevance: 20.00%

Abstract:

The 5th generation of mobile networking introduces the concept of "network slicing": the network is "sliced" horizontally, and each slice complies with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources and relies on the virtual network as the main abstraction for obtaining a logical resource. Network Function Virtualisation (NFV) provides the concept of logical resources for a virtual network function, enabling the notion of a virtual network; it relies on Software Defined Networking (SDN) as the main technology to realise the virtual network as a resource, and it also defines the concept of a virtual network infrastructure with all the components needed to meet the network slicing requirements. SDN itself uses cloud computing technology to realise the virtual network infrastructure, and NFV likewise uses virtual computing resources to deploy virtual network functions instead of requiring custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the ability to manage QoS in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability, and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right tradeoff between the QoS levels they can offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, deployment, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
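
To make the slice-differentiation idea concrete, here is a minimal Python sketch of how per-slice QoS requirements might be represented and admitted against shared capacity. The slice names, numbers, and the greedy policy are illustrative assumptions, not a mechanism described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class SliceQoS:
    name: str
    bandwidth_mbps: float   # bandwidth guaranteed to the slice
    max_latency_ms: float   # latency bound the slice must meet

def admit(requests, capacity_mbps):
    """Greedily admit slices, tightest latency bound first, while the
    aggregate guaranteed bandwidth still fits the shared capacity."""
    admitted, used = [], 0.0
    for s in sorted(requests, key=lambda s: s.max_latency_ms):
        if used + s.bandwidth_mbps <= capacity_mbps:
            admitted.append(s)
            used += s.bandwidth_mbps
    return admitted

slices = [SliceQoS("eMBB", 400.0, 50.0), SliceQoS("URLLC", 50.0, 1.0)]
print([s.name for s in admit(slices, capacity_mbps=420.0)])  # ['URLLC']
```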

Relevance: 20.00%

Abstract:

Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase in engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open source hypervisor designed as a generic virtualization layer for heterogeneous multicore platforms. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance, and automotive domains. The impact of MultiPARTES on the targeted domains will also be discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY), mixed-criticality integration is considered in multicore processors. Key challenges are the combination of software virtualization and hardware segregation and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight, etc.) along with development and certification methodology.
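
As an illustration of the strict temporal separation described above, the following Python sketch models a static cyclic schedule in which each partition receives a fixed window within a major frame. Partition names and window lengths are invented for the example; a real XtratuM system is configured through its own configuration files rather than code like this.

```python
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    criticality: str   # e.g. "safety-critical" vs. "non-critical"
    window_ms: int     # fixed execution window within the major frame

MAJOR_FRAME = [
    Partition("control", "safety-critical", 40),
    Partition("logging", "non-critical", 10),
]

def run_major_frame(frame):
    """Dispatch each partition for its fixed window. Temporal isolation
    holds by construction: a partition cannot overrun its window because
    the hypervisor preempts it at the boundary."""
    t = 0
    for p in frame:
        print(f"t={t:3d}ms  {p.name} ({p.criticality}) runs for {p.window_ms} ms")
        t += p.window_ms

run_major_frame(MAJOR_FRAME)
```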

Relevance: 20.00%

Abstract:

Virtualization techniques have received increased attention in the field of embedded real-time systems. Such techniques provide a set of virtual machines that run on a single hardware platform, thus allowing several application programs to be executed as though they were running on separate machines, with isolated memory spaces and a fraction of the real processor time available to each of them. This paper deals with some problems that arise when implementing real-time systems written in Ada on a virtual machine. The effects of virtualization on the performance of the Ada real-time services are analysed, and requirements for the virtualization layer are derived. Virtual-machine time services are also defined in order to properly support Ada real-time applications. The implementation of the ORK+ kernel on the XtratuM supervisor is used as an example.
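
One of the issues behind the virtual-machine time services mentioned above is that a partition's execution time diverges from wall-clock time whenever the hypervisor schedules another partition. The toy Python model below illustrates only that divergence; the class and method names are our own, not the ORK+ or XtratuM API.

```python
class PartitionClock:
    """Tracks global (wall-clock) time versus the time a partition
    actually executed under a partitioning hypervisor."""

    def __init__(self):
        self.wall_ns = 0
        self.execution_ns = 0

    def window(self, duration_ns):
        """The partition runs: both clocks advance."""
        self.wall_ns += duration_ns
        self.execution_ns += duration_ns

    def preempted(self, duration_ns):
        """Another partition runs: only wall-clock time advances."""
        self.wall_ns += duration_ns

clk = PartitionClock()
clk.window(40_000_000)      # 40 ms of execution
clk.preempted(10_000_000)   # 10 ms while another partition runs
print(clk.wall_ns, clk.execution_ns)  # 50000000 40000000
```

A delay set against the wall clock may thus expire while the partition is not running, which is one reason a real-time runtime needs well-defined time services from the virtualization layer.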

Relevance: 20.00%

Abstract:

Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways, which become the endpoints where applications can retrieve and process such data. However, applications would also expect an event-driven operational model from a WSN, so that they can be notified whenever specific environmental changes occur instead of continuously analyzing periodically delivered data. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for the virtual representation of event sources in a WSN is proposed. Event sources are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation in which a WSN was deployed in an open neighborhood environment. Different event sources were identified in the proposed scenario, and they were represented following the proposed model.
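
A minimal Python sketch of such a representation might look as follows; the URI scheme, field names, and notification mechanism are illustrative assumptions rather than the paper's concrete schema.

```python
from dataclasses import dataclass, field

@dataclass
class EventSource:
    uri: str            # e.g. "/wsn/1/events/door-open" (hypothetical)
    description: str
    subscribers: list = field(default_factory=list)

    def subscribe(self, callback_url):
        """Register an internet application to be notified of this event."""
        self.subscribers.append(callback_url)

    def fire(self, payload):
        """In a real deployment this would POST the payload to each
        subscriber's callback URL; here we just print the notification."""
        for url in self.subscribers:
            print(f"notify {url}: {payload}")

door = EventSource("/wsn/1/events/door-open", "door opened in room 3")
door.subscribe("http://app.example/notify")
door.fire({"value": True, "ts": "2016-05-01T12:00:00Z"})
```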

Relevance: 20.00%

Abstract:

The FIWARE initiative offers a set of powerful APIs that provide the basis for rapid and efficient innovation in the Future Internet. These APIs are key to developing applications that use very recent and innovative technologies, such as the Internet of Things or Identity Management in security modules. This document presents the development of a FIWARE web application using components virtualized in virtual machines. The web application is based on "Willy Wonka's chocolate factory" as a metaphorical implementation of a security and IoT application in an industrial environment. The main component is a node.js web server that connects to several FIWARE components, known as "Generic Enablers". The implementation consists of two main modules: the IoT module and the security module. The IoT module manages the sensors installed by Willy Wonka in the factory rooms to monitor several parameters such as temperature, pressure, or occupancy. The IoT module creates and receives context information from the virtual sensors. This context information is managed and stored in a FIWARE component known as the Context Broker. The Context Broker is based on subscription mechanisms that push sensor data to the application, in real time and whenever the data change. The connection with the client takes place through Web Sockets (socket.io). The security module manages user accounts and information, authenticates users in the application using a FIWARE account, and checks their authorization to access different resources. Different roles are created with different assigned permissions. For example, Willy Wonka may have access to all resources, while an Oompa Loompa in charge of the chocolate room should only have access to the resources of his room. This module consists of three components: the Identity Manager, the PEP Proxy, and the PDP AuthZForce. The Identity Manager stores the users' FIWARE accounts and enables Single Sign-On authentication using the OAuth2 protocol. After logging in, authenticated users receive an authentication token that is later used by AuthZForce to check the user's role and associated permissions. The PEP Proxy acts as a proxy server that forwards permitted requests and blocks unauthorized ones.
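
The FIWARE Context Broker (Orion) exposes the subscription mechanism described above through its NGSIv2 API. As a sketch, a subscription that pushes temperature changes from the factory rooms to the node.js application could be created roughly as follows; the broker address and the notification URL are placeholder assumptions.

```python
import requests

ORION_URL = "http://localhost:1026"   # assumed address of the Context Broker

subscription = {
    "description": "Push temperature changes from the factory rooms",
    "subject": {
        "entities": [{"idPattern": "Room.*", "type": "Room"}],
        "condition": {"attrs": ["temperature"]},
    },
    "notification": {
        # Hypothetical endpoint on the node.js server, which then relays
        # the update to connected browsers over socket.io
        "http": {"url": "http://webapp:3000/notify"},
        "attrs": ["temperature", "pressure"],
    },
}

resp = requests.post(f"{ORION_URL}/v2/subscriptions", json=subscription)
resp.raise_for_status()
print("Subscription created at:", resp.headers.get("Location"))
```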

Relevance: 20.00%

Abstract:

Designing educational resources allows students to modify their learning process. In particular, online and downloadable educational resources have been successfully used in engineering education in recent years [1]. Usually, these resources are free and accessible from the web. In addition, they are designed and developed by lecturers and used by their students, but they are rarely developed by students in order to be used by other students. In this work-in-progress, lecturers and students are working together to implement educational resources that students can use to improve the learning process of the computer networks subject in engineering studies. In particular, network topologies modelling LANs (Local Area Networks) and MANs (Metropolitan Area Networks) are virtualized in order to simulate the behavior of the links and nodes when they are interconnected with different physical and logical designs.
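
One possible way to realize such virtualized topologies, as a sketch, is with the Mininet network emulator (our assumption; the abstract does not name a specific tool). Here two switches model two LAN segments joined by a bandwidth- and delay-shaped link standing in for a MAN segment.

```python
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class TwoLanTopo(Topo):
    def build(self):
        s1 = self.addSwitch("s1")
        s2 = self.addSwitch("s2")
        for i, sw in ((1, s1), (2, s1), (3, s2), (4, s2)):
            self.addLink(self.addHost(f"h{i}"), sw)
        # Shaped inter-switch link emulating a slower metropolitan segment
        self.addLink(s1, s2, bw=10, delay="5ms")

net = Mininet(topo=TwoLanTopo(), link=TCLink)  # running Mininet requires root
net.start()
net.pingAll()   # students can observe the extra latency across s1-s2
net.stop()
```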

Relevance: 20.00%

Abstract:

Due to shrinking budgets and new demands for technology, the Scottsdale Community College (SCC) IT department needed an effective, sustainable solution that would provide ubiquitous access to technology for students, faculty, and staff, both on- and off-campus. This paper explores how SCC implemented a complete virtualized computing environment.

Relevance: 20.00%

Abstract:

The astonishing development of diverse hardware platforms is twofold: on one side, the challenge of exascale performance for big data processing and management; on the other side, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system developed in 2009 and first introduced in 2010, enabling a completely transparent layer between GPUs and VMs. This paper shows the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management, and scheduling. Thanks to the new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs, on local workstations, computing clusters, and distributed cloud appliances.
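
The transparency GVirtuS provides rests on a split-driver (frontend/backend) remoting pattern: a guest-side frontend serializes API calls, and a backend with real GPU access executes them and returns the results. The toy, in-process Python mock below conveys only the pattern; it is not GVirtuS's actual wire protocol or API.

```python
import json

class Backend:
    """Runs where the physical GPU lives; here the 'GPU' is simulated."""
    def handle(self, request: str) -> str:
        call = json.loads(request)
        if call["op"] == "deviceCount":
            return json.dumps({"result": 1})   # pretend one GPU is present
        return json.dumps({"error": f"unsupported op {call['op']}"})

class Frontend:
    """Runs inside the VM; forwards calls instead of touching hardware."""
    def __init__(self, transport):
        self.transport = transport   # in reality: a socket to the host

    def device_count(self) -> int:
        reply = self.transport.handle(json.dumps({"op": "deviceCount"}))
        return json.loads(reply)["result"]

print(Frontend(Backend()).device_count())   # -> 1
```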

Relevance: 20.00%

Abstract:

The purpose of this thesis work is the study and creation of a harness modelling system. The model needs to faithfully simulate the physical behaviour of the harness, without any instability or incorrect movements. Since there are various simulation engines that attempt to model wiring systems, this thesis work focused on the creation and testing of a 3D environment with wiring and other objects using the PyChrono simulation engine. Fine-tuning of the simulation parameters was done during the tests to achieve the most stable and correct simulation possible, but the tests showed the intrinsic limits of the engine regarding collision detection between the various parts of the cables, while collisions between cables and other physical objects such as the pavement, walls, and others are well managed by the simulator. Finally, since the main purpose of the model is to be used to train Artificial Intelligence through Reinforcement Learning techniques, we designed, using the OpenAI Gym APIs, the general structure of the learning environment, defining its basic functions and an initial framework.
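
Since the thesis builds its learning environment on the OpenAI Gym APIs, a minimal skeleton of such an environment is sketched below. The observation and action sizes, the stub dynamics, and the reward are placeholder assumptions; a real implementation would advance the PyChrono simulation inside step().

```python
import gym
import numpy as np
from gym import spaces

class HarnessEnv(gym.Env):
    """Skeleton of a harness-manipulation environment (classic Gym API)."""

    def __init__(self):
        super().__init__()
        # Assumed: a 3-DoF force applied to the grasped cable end
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        # Assumed: positions of two tracked points along the cable
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,),
                                            dtype=np.float32)

    def reset(self):
        self.state = np.zeros(6, dtype=np.float32)  # stub: reset PyChrono here
        return self.state

    def step(self, action):
        # Stub dynamics; a real step would advance the PyChrono system
        self.state[:3] += 0.01 * action
        reward = -float(np.linalg.norm(self.state[:3]))  # placeholder goal
        done = reward > -1e-3
        return self.state, reward, done, {}

env = HarnessEnv()
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
```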

Relevance: 10.00%

Abstract:

Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications, and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and fulfillment of users' SLA specifications. The methodology usually applied is packing all the virtual machines onto the proper physical servers. However, failure occurrences in these networked computing systems can have a substantial negative impact on system performance, deviating the system from our initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts, in order to improve cloud infrastructure power efficiency with low impact on the users' required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, allied with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better in terms of power efficiency and SLA fulfillment in the face of cloud infrastructure failures.
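
As a hedged sketch of the kind of mapping policy described: first-fit-decreasing packing that consolidates VMs onto few hosts (allowing the rest to be idled for power efficiency) while skipping hosts a proactive fault predictor has flagged. Host names, capacities, and the risk flag are invented for illustration; the paper's actual algorithms are not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float
    at_risk: bool = False          # set by a proactive fault predictor
    load: float = 0.0
    vms: list = field(default_factory=list)

def place(vms, hosts):
    """Pack VMs (name, demand) onto the fewest healthy hosts, largest
    demand first, so unused hosts can be powered down."""
    for vm_name, demand in sorted(vms, key=lambda v: -v[1]):
        for h in hosts:
            if not h.at_risk and h.load + demand <= h.capacity:
                h.vms.append(vm_name)
                h.load += demand
                break
        else:
            raise RuntimeError(f"no healthy host can fit {vm_name}")

hosts = [Host("h1", 8.0), Host("h2", 8.0, at_risk=True), Host("h3", 8.0)]
place([("web", 3.0), ("db", 4.0), ("cache", 2.0)], hosts)
for h in hosts:
    print(h.name, h.vms)   # h2 stays empty despite having capacity
```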

Relevance: 10.00%

Abstract:

ECER 2014 "The Past, the Present and Future of Educational Research in Europe" will take place at the University of Porto from 1 to 5 September 2014.

Relevance: 10.00%

Abstract:

ECER 2015 "Education and Transition - Contributions from Educational Research" will take place at Corvinus University of Budapest from 7 to 11 September 2015.

Relevance: 10.00%

Abstract:

The constant evolution of the Internet and its increasing use, together with its growing entanglement with private and public activities and the strong impact this has on their survival, has given rise to an emerging technology. Through cloud computing, it is possible to abstract users from the layers below the business, focusing only on what matters most to manage, with the advantage of being able to grow (or shrink) resources as needed. The cloud paradigm arises from the need to optimize IT resources within an emergent and rapidly expanding technology. In this regard, after a study of the most common cloud platforms and of the current implementation of the technologies applied at the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of Oporto University, an evolution is proposed in order to address certain requirements in the context of cloud computing.