156 results for virtualization


Relevance: 10.00%

Abstract:

The increasing need for computational power in areas such as weather simulation, genomics, and Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid, and cloud computing, together with improvements in network and hardware virtualization, has produced methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies for efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach comprises three main parts. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we design and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of partner cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection when executing parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
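To make the third part concrete, here is a toy site-selection policy in Python: it estimates each site's completion time (queue wait plus on-demand provisioning delay plus run time) and monetary cost, then picks the site minimizing a weighted combination. The Site fields, numbers, and weighting are hypothetical illustrations, not the strategies evaluated in the thesis.

```python
# Toy site-selection policy: estimate completion time (queue wait +
# on-demand provisioning delay + run time) and monetary cost per site,
# then pick the site minimizing a weighted score. All values are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    queue_wait_h: float   # expected wait in the local queue (hours)
    provision_h: float    # time to provision VMs on demand (hours)
    speed: float          # relative CPU speed (1.0 = baseline)
    price_cpu_h: float    # $ per CPU-hour

def pick_site(sites, cpu_hours, procs, time_weight=0.5):
    def score(s):
        run_h = cpu_hours / (procs * s.speed)
        completion_h = s.queue_wait_h + s.provision_h + run_h
        cost = cpu_hours / s.speed * s.price_cpu_h
        return time_weight * completion_h + (1 - time_weight) * cost
    return min(sites, key=score)

sites = [
    Site("local-cluster", queue_wait_h=6.0, provision_h=0.0, speed=1.0, price_cpu_h=0.00),
    Site("partner-grid",  queue_wait_h=2.0, provision_h=0.2, speed=0.8, price_cpu_h=0.02),
    Site("public-cloud",  queue_wait_h=0.0, provision_h=0.1, speed=1.2, price_cpu_h=0.10),
]
print(pick_site(sites, cpu_hours=512, procs=64).name)
```

Raising `time_weight` toward 1.0 favors the fast-provisioning cloud site; lowering it favors the free local cluster, mirroring the slowdown-versus-cost trade-off discussed above.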

Relevance: 10.00%

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications make administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques to substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, it is vital to accurately capture the effects of resource allocations on application performance. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients pay exactly for the performance they actually experience, while administrators can maximize their total revenue by combining application performance models and SLAs. This thesis made the following contributions. First, we identified the resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications; we also suggested and evaluated modeling optimizations necessary to improve prediction accuracy with these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
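As a rough sketch of the modeling-plus-sizing idea, the snippet below trains a Support Vector Machine regressor (scikit-learn's SVR, standing in for the thesis's tuned models) on synthetic allocation-versus-response-time data, then scans candidate allocations for the cheapest one whose predicted response time meets an SLA target. The data, pricing, and SLA value are invented for illustration.

```python
# Sketch of performance modeling with SVR, then SLA-driven VM sizing.
# Training data is synthetic; in practice it would come from benchmarking
# a virtualized application under different resource caps.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Features: (CPU cap in shares, memory in MB); target: response time (ms).
X = rng.uniform([256, 512], [2048, 8192], size=(200, 2))
y = 50 + 8e4 / X[:, 0] + 2e5 / X[:, 1] + rng.normal(0, 2, 200)

model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.5))
model.fit(X, y)

# "Optimal VM sizing": scan a grid of candidate allocations and keep the
# cheapest one whose predicted response time meets the SLA target.
sla_ms = 150.0
cand = np.array([(cpu, mem) for cpu in range(256, 2049, 128)
                            for mem in range(512, 8193, 512)])
pred = model.predict(cand)
cost = 0.01 * cand[:, 0] + 0.002 * cand[:, 1]   # hypothetical pricing
ok = pred <= sla_ms
assert ok.any(), "no candidate meets the SLA"
best = cand[ok][np.argmin(cost[ok])]
print(f"cheapest SLA-meeting allocation: cpu={best[0]} shares, mem={best[1]} MB")
```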

Relevance: 10.00%

Abstract:

With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, and visualization, along with the resource management of such services, are becoming increasingly important for delivering the Quality of Service (QoS) users expect. First, spatial indexing is typically time-consuming and is not available to end users. To address this, we introduce TerraFly sksOpen, an open-sourced Online Indexing and Querying System for Big Geospatial Data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user’s data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running upon the TerraFly map that efficiently supports many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share analysis results. TerraFly GeoCloud also provides the MapQL technology to customize map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly is a set of techniques to predict the demand of map workloads online and optimize resource allocations, considering both response time and data freshness as the QoS target. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly predicts workload demands 18.91% more accurately, improves QoS by 26.19%, and saves 20.83% in resource usage compared to traditional peak-load-based resource allocation.
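For intuition, a Top-k Spatial Boolean Query ranks objects by distance to a query point while filtering on a boolean predicate over their keywords. The brute-force sketch below answers such a query over a handful of hypothetical objects; sksOpen answers the same query class with a dedicated index rather than a linear scan.

```python
# Brute-force Top-k Spatial Boolean Query: keep objects whose keyword set
# satisfies the boolean predicate, then return the k nearest to the query
# point. The objects and tags below are hypothetical.
import heapq, math

objects = [
    {"id": 1, "xy": (25.76, -80.19), "tags": {"restaurant", "cuban"}},
    {"id": 2, "xy": (25.79, -80.13), "tags": {"restaurant", "seafood"}},
    {"id": 3, "xy": (25.77, -80.20), "tags": {"museum"}},
    {"id": 4, "xy": (25.75, -80.22), "tags": {"restaurant", "cuban", "bar"}},
]

def top_k(query_xy, k, must=frozenset(), must_not=frozenset()):
    qx, qy = query_xy
    hits = (o for o in objects
            if must <= o["tags"] and not (must_not & o["tags"]))
    return heapq.nsmallest(
        k, hits, key=lambda o: math.hypot(o["xy"][0] - qx, o["xy"][1] - qy))

# Two nearest objects tagged "restaurant" but not "seafood":
for o in top_k((25.76, -80.20), k=2, must={"restaurant"}, must_not={"seafood"}):
    print(o["id"], o["tags"])
```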

Relevance: 10.00%

Abstract:

The telecommunications industry is entering a new era. The increased traffic demands imposed by the huge number of always-on connections require a quantum leap in the field of enabling techniques. Furthermore, subscribers expect ever-increasing quality of experience, while network operators and service providers aim for cost-efficient networks. Meeting these requirements calls for a revolutionary change in the telecommunications industry, analogous to the success of virtualization in the IT industry, which is now driving the deployment and expansion of cloud computing. Telecommunications providers are currently rethinking their network architecture: from one consisting of a multitude of black boxes with specialized network hardware and software to one consisting of "white box" hardware running a multitude of specialized network software. This network software may be data plane software providing network functions virtualization (NFV) or control plane software providing centralized network management, i.e., software-defined networking (SDN). These architectural changes are expected to permeate networks ranging in size from Internet core networks to metro networks to enterprise networks, and ranging in functionality from converged packet-optical networks to wireless core networks to wireless radio access networks.
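The data-plane/control-plane split can be illustrated with a toy match-action flow table: a centralized controller installs rules, and the switch's data plane only looks them up. The sketch below is a loose simplification of what protocols such as OpenFlow provide; the rule fields and actions are hypothetical.

```python
# Toy SDN split: the controller pushes match-action rules into a "white
# box" switch; the switch's data plane matches packets against the table.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict      # header fields to match, e.g. {"dst": "10.0.0.2"}
    action: str      # e.g. "forward:port2" or "drop"
    priority: int = 0

@dataclass
class Switch:
    table: list = field(default_factory=list)   # data plane: flow table

    def install(self, rule):                    # invoked by the controller
        self.table.append(rule)
        self.table.sort(key=lambda r: -r.priority)

    def handle(self, packet):                   # data plane fast path
        for rule in self.table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send_to_controller"             # table miss

sw = Switch()
# Control plane decides policy centrally and pushes it down:
sw.install(FlowRule({"dst": "10.0.0.2"}, "forward:port2", priority=10))
sw.install(FlowRule({}, "drop"))                # low-priority default
print(sw.handle({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # forward:port2
print(sw.handle({"dst": "10.0.0.9"}))                     # drop
```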

Relevance: 10.00%

Abstract:

Collaborative Anomaly Detection (CAD) is an emerging field of network security in both academia and industry. It has attracted a lot of attention due to the limitations of traditional fortress-style defense models. Even though a number of pioneering studies have been conducted in this area, few of them address the issue of universality. This work focuses on two aspects of that issue. First, a unified collaborative detection framework is developed based on network virtualization technology. Its purpose is to provide a generic approach that can be applied to designing specific schemes for various application scenarios and objectives. Second, a general behavior perception model is proposed for the unified framework based on hidden Markov random fields. Spatial Markovianity is introduced to model the spatial context of distributed network behavior and the stochastic interaction among interconnected nodes. Algorithms are derived for parameter estimation, forward prediction, backward smoothing, and the normality evaluation of both the global network situation and local behavior. Numerical experiments using extensive simulations and several real datasets are presented to validate the proposed solution. Performance-related issues and comparisons with related work are also discussed.
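To give a flavor of the spatial-Markov idea, the toy sketch below assigns each node a hidden normal/anomalous label, combines a local observation term with an Ising-style smoothness prior over graph neighbors, and estimates labels by iterated conditional modes (ICM). The graph, scores, and weights are invented, and the thesis's hidden-Markov-random-field model and algorithms are considerably richer.

```python
# Toy spatial MRF for collaborative anomaly detection: hidden labels
# (0 = normal, 1 = anomalous), noisy local anomaly scores, and a prior
# that encourages interconnected nodes to agree. Estimated via ICM.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]   # collaboration graph
obs   = [0.1, 0.8, 0.9, 0.7, 0.2]                  # local anomaly scores
n = len(obs)
nbrs = [[] for _ in range(n)]
for a, b in edges:
    nbrs[a].append(b); nbrs[b].append(a)

def local_energy(i, label, labels, beta=0.3):
    # Data term: squared distance of the observation from the label's mean.
    data = (obs[i] - (0.9 if label else 0.1)) ** 2
    # Smoothness term: penalize disagreeing with neighbors.
    smooth = beta * sum(1 for j in nbrs[i] if labels[j] != label)
    return data + smooth

labels = [1 if s > 0.5 else 0 for s in obs]        # initial guess
for _ in range(10):                                # ICM sweeps
    changed = False
    for i in range(n):
        best = min((0, 1), key=lambda l: local_energy(i, l, labels))
        if best != labels[i]:
            labels[i], changed = best, True
    if not changed:
        break

print("estimated labels:", labels)   # nodes 1-3 flagged as anomalous
```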