156 results for virtualization


Relevance:

20.00%

Publisher:

Abstract:

The 5th generation of mobile networking introduces the concept of "network slicing": the network is "sliced" horizontally, and each slice complies with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources, relying on the virtual network as the main concept for obtaining a logical resource. Network Function Virtualisation (NFV) provides the concept of logical resources for a virtual network function, enabling the notion of a virtual network; it relies on Software Defined Networking (SDN) as the main technology for realizing the virtual network as a resource, and it also defines the concept of a virtual network infrastructure with all the components needed to meet network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV likewise uses virtual computing resources to enable the deployment of virtual network functions instead of requiring custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the possibility of QoS management in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right tradeoff between the QoS levels they can offer and operational costs. While QoS properties received constant attention even before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, deployment, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
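
As a minimal illustration of the slice-differentiation idea above, the sketch below models a slice descriptor carrying the QoS parameters that distinguish one slice from another. All class and field names are hypothetical, not part of any standardised 5G or NFV API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceQoS:
    """QoS parameters that differentiate one network slice from another."""
    bandwidth_mbps: float   # guaranteed throughput
    latency_ms: float       # maximum tolerated end-to-end latency
    availability: float     # e.g. 0.999 for "three nines"

@dataclass
class NetworkSlice:
    """A logical slice mapped onto shared virtual network resources."""
    name: str
    qos: SliceQoS

# Two slices with very different requirements sharing one infrastructure.
embb = NetworkSlice("eMBB-video", SliceQoS(1000.0, 50.0, 0.99))
urllc = NetworkSlice("URLLC-control", SliceQoS(10.0, 1.0, 0.99999))
```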

Relevance:

20.00%

Publisher:

Abstract:

Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase in engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open source hypervisor designed as a generic virtualization layer for heterogeneous multicore processors. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance, and automotive domains. The impact of MultiPARTES on the targeted domains will also be discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY), mixed-criticality integration is considered on multicore processors. Key challenges are the combination of software virtualization and hardware segregation, and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight, etc.) along with development and certification methodology.
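
A hypervisor such as XtratuM has its own configuration format (not reproduced here); the sketch below merely illustrates the partitioning idea the abstract describes, assigning criticality levels to spatially and temporally separated partitions. All names, levels, addresses and time slots are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    # Illustrative safety-integrity levels (automotive-style).
    QM = 0
    ASIL_B = 2
    ASIL_D = 4

@dataclass
class Partition:
    """One spatially and temporally isolated partition on the hypervisor."""
    name: str
    criticality: Criticality
    memory_region: tuple[int, int]  # (base address, size): spatial isolation
    cpu_slot_ms: int                # slot in the cyclic schedule: temporal isolation

# Mixed-criticality integration: each subsystem is verified in isolation.
partitions = [
    Partition("brake-control", Criticality.ASIL_D, (0x4000_0000, 0x0010_0000), 2),
    Partition("infotainment",  Criticality.QM,     (0x5000_0000, 0x0040_0000), 8),
]
```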

Relevance:

20.00%

Publisher:

Abstract:

Virtualization techniques have received increased attention in the field of embedded real-time systems. Such techniques provide a set of virtual machines that run on a single hardware platform, thus allowing several application programs to be executed as though they were running on separate machines, with isolated memory spaces and a fraction of the real processor time available to each of them. This paper deals with some problems that arise when implementing real-time systems written in Ada on a virtual machine. The effects of virtualization on the performance of the Ada real-time services are analysed, and requirements for the virtualization layer are derived. Virtual-machine time services are also defined in order to properly support Ada real-time applications. The implementation of the ORK+ kernel on the XtratuM supervisor is used as an example.
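
The core difficulty is the gap between wall-clock time and the CPU time actually granted to a partition. The toy calculation below (assumed numbers, not the ORK+/XtratuM interface) shows how a periodic task that is feasible on bare metal can miss its deadline inside a virtual machine.

```python
# A periodic task correct on bare metal can fail under a hypervisor: of each
# 10 ms of wall-clock time, the partition may only receive a fraction of CPU.
period_ms = 10          # task period as seen by the Ada application
partition_share = 0.4   # fraction of CPU time the hypervisor grants (assumed)
wcet_ms = 5             # worst-case execution time on bare metal (assumed)

available_ms = period_ms * partition_share  # CPU time per period in the VM
print(f"available {available_ms} ms vs WCET {wcet_ms} ms:",
      "feasible" if available_ms >= wcet_ms else "deadline miss")
```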

Relevance:

20.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways, which become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur, instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model.
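
A minimal sketch of the idea of exposing a WSN event source as an internet resource, using only the Python standard library; the URI, sensor name and trigger condition are hypothetical, and a real deployment would run on a gateway bridging the WSN to the internet.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Virtual representation of one event source: "temperature above 30 C".
EVENT_SOURCE = {
    "id": "temp-high-room1",
    "sensor": "node-17/temperature",
    "condition": "value > 30",
    "last_fired": None,
}

class EventSourceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any internet application can read the event source's state.
        if self.path == "/events/temp-high-room1":
            body = json.dumps(EVENT_SOURCE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), EventSourceHandler).serve_forever()
```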

Relevance:

20.00%

Publisher:

Abstract:

The FIWARE initiative offers a set of powerful APIs that provide the basis for fast and efficient innovation in the Future Internet. These APIs are key to developing applications that use very recent and innovative technologies, such as the Internet of Things or Identity Management in security modules. This document presents the development of a FIWARE web application using components virtualized in virtual machines. The web application is based on "Willy Wonka's chocolate factory" as a metaphorical implementation of a security and IoT application in an industrial environment. The main component is a node.js web server that connects to several FIWARE components, known as "Generic Enablers". The implementation consists of two main modules: the IoT module and the security module. The IoT module manages the sensors installed by Willy Wonka in the factory rooms to monitor various parameters such as temperature, pressure or occupancy. The IoT module creates and receives context information from the virtual sensors. This context information is managed and stored in a FIWARE component known as the Context Broker. The Context Broker is based on subscription mechanisms that push sensor data to the application in real time whenever the data change. The connection to the client is made through Web Sockets (socket.io). The security module manages user accounts and information, authenticates users in the application using a FIWARE account, and checks their authorization to access different resources. Different roles are created with different permissions assigned. For example, Willy Wonka may have access to all resources, whereas an Oompa Loompa in charge of the chocolate room should only have access to the resources of his room. This module is composed of three components: the Identity Manager, the PEP Proxy and the PDP AuthZForce. The Identity Manager stores the users' FIWARE accounts and enables Single Sign-On authentication using the OAuth2 protocol. After logging in, authenticated users receive an authentication token that is later used by AuthZForce to check the user's role and associated permissions. The PEP Proxy acts as a proxy server that forwards authorized requests and blocks unauthorized ones.
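
As an illustration of the subscription mechanism described above, the sketch below registers an NGSIv2 subscription with an Orion Context Broker so that sensor changes are pushed to the application. The host names, entity pattern and notification endpoint are assumptions for illustration.

```python
import requests

subscription = {
    "description": "Notify the web app when a factory room's readings change",
    "subject": {
        "entities": [{"idPattern": "Room.*", "type": "Room"}],
        "condition": {"attrs": ["temperature"]},
    },
    "notification": {
        "http": {"url": "http://webapp:3000/notify"},  # hypothetical endpoint
        "attrs": ["temperature", "pressure", "occupancy"],
    },
}

# Orion's NGSIv2 API answers 201 Created when the subscription is accepted.
resp = requests.post("http://orion:1026/v2/subscriptions", json=subscription)
resp.raise_for_status()
```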

Relevance:

20.00%

Publisher:

Abstract:

Designing educational resources allows students to modify their learning process. In particular, online and downloadable educational resources have been used successfully in engineering education in recent years [1]. Usually, these resources are free and accessible from the web. In addition, they are designed and developed by lecturers and used by their students, but they are rarely developed by students in order to be used by other students. In this work-in-progress, lecturers and students are working together to implement educational resources that can be used by students to improve the learning process of the computer networks subject in engineering studies. In particular, network topologies modelling LAN (Local Area Network) and MAN (Metropolitan Area Network) scenarios are virtualized in order to simulate the behavior of the links and nodes when they are interconnected with different physical and logical designs.
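
The abstract does not name a concrete tool; as one plausible sketch, a network emulator such as Mininet can virtualize a small LAN/MAN topology whose links and nodes students can then reconfigure and observe.

```python
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class TwoSwitchLan(Topo):
    """Two interconnected switches with two hosts each: a toy LAN/MAN model."""
    def build(self):
        s1, s2 = self.addSwitch("s1"), self.addSwitch("s2")
        for i, sw in enumerate((s1, s1, s2, s2), start=1):
            self.addLink(self.addHost(f"h{i}"), sw)
        # The inter-switch link stands in for the "MAN" segment.
        self.addLink(s1, s2, bw=10, delay="5ms")

net = Mininet(topo=TwoSwitchLan(), link=TCLink)
net.start()
net.pingAll()   # observe connectivity across the virtual topology
net.stop()
```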

Relevance:

20.00%

Publisher:

Abstract:

Due to shrinking budgets and new demands for technology, the Scottsdale Community College (SCC) IT department needed an effective, sustainable solution that would provide ubiquitous access to technology for students, faculty, and staff, both on- and off-campus. This paper explores how SCC implemented a completely virtualized computing environment.

Relevance:

20.00%

Publisher:

Abstract:

The astonishing development of diverse hardware platforms is twofold: on one side, the challenge of exascale performance for big data processing and management; on the other side, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system, developed in 2009 and first introduced in 2010, that enables a completely transparent layer between GPUs and VMs. This paper shows the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management and scheduling. Thanks to the new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs, on local workstations, computing clusters and distributed cloud appliances.
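
The split-driver idea behind GVirtuS (a front end inside the VM forwards API calls to a back end with real GPU access) can be sketched as below. The wire format and the example call are inventions for illustration, not the actual GVirtuS protocol.

```python
import json
import socket

BACKEND = ("127.0.0.1", 9999)  # hypothetical back-end address

def remote_call(func: str, *args):
    """Guest-side stub: forward func(*args) to the back end and return the result."""
    with socket.create_connection(BACKEND) as s:
        s.sendall(json.dumps({"func": func, "args": list(args)}).encode())
        s.shutdown(socket.SHUT_WR)  # signal end of request
        return json.loads(s.makefile().read())

# Inside the VM, a CUDA-like call then looks local, e.g.:
# handle = remote_call("cudaMalloc", 1024)
```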

Relevance:

20.00%

Publisher:

Abstract:

Virtualization has brought immense change to modern technology, especially in computer networks, over the last decade. The enormity of big data has caused massive graphs to grow exponentially in size in recent years, so that conventional tools and algorithms are becoming too weak to process them. Reducing the size of massive graphs is a major challenge in the current era, and extracting useful information from huge graphs is also problematic. In this paper, we present a concept for designing a virtual graph, vGraph, in a virtual plane above the original plane containing the original massive graph, and we propose a novel cumulative similarity measure for vGraph. Using vGraph in place of the massive graph is advantageous in terms of space and time. Our proposed algorithm has two main parts. In the first part, virtual nodes are designed from the original nodes based on the calculation of cumulative similarity among them. In the second part, virtual edges are designed to link the virtual nodes based on the calculation of a similarity measure among the original edges of the original massive graph. The algorithm is tested on synthetic and real-world datasets, which shows the efficiency of our proposed algorithms.
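
A sketch of the two-phase construction described above. The paper's cumulative similarity measure is not reproduced in the abstract, so Jaccard similarity of node neighborhoods stands in as a placeholder.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def build_vgraph(adj: dict, threshold: float = 0.3):
    # Phase 1: greedily merge similar original nodes into virtual nodes.
    vnodes = []  # each virtual node is a set of original nodes
    for node in adj:
        for vn in vnodes:
            if any(jaccard(adj[node], adj[m]) >= threshold for m in vn):
                vn.add(node)
                break
        else:
            vnodes.append({node})
    # Phase 2: add a virtual edge wherever an original edge crosses virtual nodes.
    vedges = {(i, j) for (i, vi), (j, vj) in combinations(enumerate(vnodes), 2)
              if any(n2 in adj[n1] for n1 in vi for n2 in vj)}
    return vnodes, vedges

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
print(build_vgraph(adj))  # the triangle 1-2-3 collapses into one virtual node
```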

Relevance:

10.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) is a large multidisciplinary university located in Brisbane, Queensland, Australia. QUT is increasing its research focus and is developing its research support services. It has adopted a model of collaboration between the Library, High Performance Computing and Research Support (HPC) and, more broadly, Information Technology Services (ITS). Research support services provided by the Library include the provision of information resources and discovery services, bibliographic management software, assistance with publishing (publishing strategies, identifying high impact journals, dealing with publishers and the peer review process), citation analysis and calculating authors' H Index. Research data management services are being developed by the Library and HPC working in collaboration. The HPC group within ITS supports research computing infrastructure, research development and engagement activities, researcher consultation, high speed computation and data storage systems, 2D/3D (immersive) visualisation tools, parallelisation and optimization of research codes, statistics/data modeling training and support (both qualitative and quantitative), and the university's central Access Grid collaboration facility. Development and engagement activities include participation in research grants and papers, student supervision and internships, and the sponsorship, incubation and adoption of new computing technologies for research. ITS also provides other services that support research, including ICT training, research infrastructure (networking, data storage, federated access and authorization, virtualization) and corporate systems for research administration. Seminars and workshops are offered to increase awareness and uptake of new and existing services. A series of online surveys on eResearch practices and skills and a number of focus groups were conducted to better inform the development of research support services. Progress towards the provision of research support is described within the context of organizational frameworks; resourcing; infrastructure; integration; collaboration; change management; engagement; awareness and skills; new services; and leadership. Challenges to be addressed include the need to redeploy existing operational resources toward new research support services, supporting a rapidly growing research profile across the university, the growing need for the use and support of IT in research programs, finding capacity to address the diverse research support needs across the disciplines, operationalising new research support services following their implementation in project mode, embedding new specialist staff roles, cross-skilling Liaison Librarians, and ensuring continued collaboration between stakeholders.

Relevance:

10.00%

Publisher:

Abstract:

Virtual prototyping is emerging as a new technology to replace existing physical prototypes for product evaluation, which are costly and time consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment which allows investigation of operator and product compatibility. Although the integration between DHM and CAD systems allows for the ergonomic analysis of anthropometric design, human musculoskeletal, multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiological design. They provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists in conducting a complete ergonomic analysis. AMS is a stand-alone programming system with no capability to integrate into CAD environments. Jack provides a CAD-integrated human-in-the-loop capability, but without considering musculoskeletal activity. Consequently, engineers and ergonomists need to perform many redundant tasks during product and process design. Moreover, the existing biomechanical model in AMS uses a simplified estimation of body proportions, based on a scaling approach derived from segment mass ratios. This is insufficient to represent user populations in an anthropometrically correct way in AMS. In addition, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modeling. This interface provides direct data exchange between the two man-models, based on a consistent data structure and common body model. The study assesses the kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented in Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
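
The sketch below only illustrates the kind of common body-model record such an interface could exchange between Jack (posture and anthropometry) and AMS (musculoskeletal analysis); all field names are invented, not the project's actual data structure.

```python
from dataclasses import dataclass

@dataclass
class JointAngle:
    joint: str          # e.g. "right_elbow"
    flexion_deg: float

@dataclass
class PostureFrame:
    """One posture exported from Jack, to be replayed in an AMS analysis."""
    stature_mm: float   # anthropometry, so both systems scale the same body
    body_mass_kg: float
    angles: list[JointAngle]

frame = PostureFrame(stature_mm=1750, body_mass_kg=75,
                     angles=[JointAngle("right_elbow", 90.0)])
```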

Relevance:

10.00%

Publisher:

Abstract:

We investigate existing cloud storage schemes and identify limitations in each one based on the security services that they provide. We then propose a new cloud storage architecture that extends CloudProof of Popa et al. to provide availability assurance. This is accomplished by incorporating a proof-of-storage protocol. As a result, we obtain the first secure cloud storage scheme that furnishes all three properties of availability, fairness and freshness.
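
Neither CloudProof nor the adopted proof-of-storage protocol is reproduced in the abstract; the sketch below is a generic Merkle-tree challenge-response conveying the availability-assurance idea: the client keeps only a root hash and challenges the server to prove possession of a randomly chosen block.

```python
import hashlib, os, random

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_levels(leaves):
    levels = [[h(b) for b in leaves]]
    while len(levels[-1]) > 1:
        lv = levels[-1]
        levels.append([h(lv[i] + lv[i + 1]) for i in range(0, len(lv), 2)])
    return levels

def auth_path(levels, idx):
    path = []
    for lv in levels[:-1]:
        sib = idx ^ 1
        path.append((lv[sib], sib < idx))  # (sibling hash, sibling on left?)
        idx //= 2
    return path

def verify(root, block, idx, path):
    node = h(block)
    for sib, left in path:
        node = h(sib + node) if left else h(node + sib)
    return node == root

blocks = [os.urandom(256) for _ in range(8)]  # data held by the server
levels = merkle_levels(blocks)
root = levels[-1][0]                          # the client keeps only this

idx = random.randrange(len(blocks))           # client's random challenge
ok = verify(root, blocks[idx], idx, auth_path(levels, idx))
print(f"block {idx} proof valid:", ok)
```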

Relevance:

10.00%

Publisher:

Abstract:

Server consolidation using virtualization technology has become an important technique for improving the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center, not the energy consumed by the data center's communication network. The energy consumption of the communication network is not trivial, and should therefore also be considered in virtual machine placement in order to make the data center more energy-efficient. In this paper, we propose a genetic algorithm for a new virtual machine placement problem that considers the energy consumption in both the servers and the communication network of the data center. Experimental results show that the genetic algorithm performs well when tackling test problems of different kinds, and scales up well as the problem size increases.
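
The abstract does not give the paper's chromosome encoding or energy models; the sketch below is a generic genetic algorithm with made-up linear energy terms for both the servers and the network, to convey the joint objective.

```python
import random

VMS, HOSTS = 10, 4
CPU = [random.uniform(0.1, 0.4) for _ in range(VMS)]  # per-VM CPU demand
TRAFFIC = {(i, j): random.random() for i in range(VMS) for j in range(i)}

def energy(plan):  # plan[i] = index of the host running VM i
    # Server term: idle cost per active host plus load-proportional cost.
    server = 100 * len(set(plan)) + 150 * sum(CPU)
    # Network term: traffic between co-located VMs never crosses a switch.
    network = sum(t for (i, j), t in TRAFFIC.items() if plan[i] != plan[j])
    return server + 50 * network

def mutate(plan):
    child = plan[:]
    child[random.randrange(VMS)] = random.randrange(HOSTS)
    return child

def crossover(a, b):
    cut = random.randrange(1, VMS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(HOSTS) for _ in range(VMS)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=energy)                # lower energy = fitter
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = min(pop, key=energy)
print("best placement:", best, "energy:", round(energy(best), 1))
```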