14 results for virtualization

in Deakin Research Online - Australia


Relevance: 20.00%

Publisher:

Abstract:

Mobile virtualization has emerged fairly recently and is considered a valuable way to mitigate security risks on Android devices. However, mobile virtualization faces major challenges in runtime performance, hardware resource overhead, and compatibility. In this paper, we propose a lightweight Android virtualization solution named Condroid, which is based on container technology. Condroid uses the Linux namespaces feature for resource isolation and the cgroups feature for resource control. By leveraging these features, Condroid can host multiple independent Android containers on a single kernel. Furthermore, our implementation includes a system-service sharing mechanism to reduce memory utilization and a filesystem sharing mechanism to reduce storage usage. Evaluation results on a Google Nexus 5 demonstrate that Condroid is feasible in terms of runtime performance, hardware resource overhead, and compatibility, and that it delivers higher performance than other virtualization solutions.
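The cgroups-based resource control the abstract describes can be illustrated with a minimal sketch. This is not Condroid's actual code; the class names and default limits are assumptions, and the sketch only models the cgroup v1 file writes a container manager would perform rather than touching the real filesystem.

```python
# Illustrative sketch (not Condroid's implementation): per-container resource
# control expressed the way cgroups v1 exposes it -- each container maps to a
# cgroup directory whose files hold its CPU and memory limits.

class ContainerCgroup:
    """Models the cgroup files a container manager would write for one container."""

    def __init__(self, name, cpu_shares, mem_limit_bytes):
        self.name = name
        self.cpu_shares = cpu_shares
        self.mem_limit_bytes = mem_limit_bytes

    def planned_writes(self, root="/sys/fs/cgroup"):
        # (path, value) pairs a real manager would open() and write;
        # kept as data here so the sketch stays side-effect free.
        return [
            (f"{root}/cpu/{self.name}/cpu.shares", str(self.cpu_shares)),
            (f"{root}/memory/{self.name}/memory.limit_in_bytes",
             str(self.mem_limit_bytes)),
        ]

# Two Android containers sharing one kernel, with different resource budgets.
containers = [ContainerCgroup("android0", 512, 512 * 1024 * 1024),
              ContainerCgroup("android1", 256, 256 * 1024 * 1024)]
for c in containers:
    for path, value in c.planned_writes():
        print(path, "<-", value)
```

Namespace isolation (the other half of the mechanism) would sit alongside this: each container's processes run in their own PID, mount, and network namespaces, so the cgroup limits above apply to an already-isolated process tree.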

Relevance: 20.00%

Publisher:

Abstract:

Multicore processors are widely used in today's computer systems, and multicore virtualization technology provides an elastic way to utilize them more efficiently. However, the Lock Holder Preemption (LHP) problem in virtualized multicore systems wastes significant CPU cycles, which hurts virtual machine (VM) performance and increases response latency; the more VMs the system consolidates, the worse the LHP problem becomes. In this paper, we propose an efficient consolidation-aware vCPU scheduling (CVS) scheme for multicore virtualization platforms. Based on the vCPU over-commitment rate, the CVS scheme adaptively selects one of three vCPU scheduling algorithms: co-scheduling, yield-to-head, and yield-to-tail. This is possible because vCPU scheduling decomposes into simple steps, such as scheduling vCPUs simultaneously or inserting a vCPU into the run-queue at the head or the tail. The CVS scheme can effectively improve VM performance in low, middle, and high VM consolidation scenarios. Using real-life parallel benchmarks, our experimental results show that the proposed CVS scheme improves overall system performance while keeping optimization overhead low.
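The adaptive selection step can be sketched in a few lines. The threshold values and function names here are assumptions for illustration, not the paper's parameters; the run-queue is modeled as a deque so that yield-to-head and yield-to-tail become the two possible reinsertion points for a preempted lock-holder vCPU.

```python
from collections import deque

# Illustrative sketch (thresholds are assumed, not from the paper): choose a
# vCPU scheduling algorithm from the over-commitment rate, then apply the
# head/tail reinsertion step to a run-queue.

def select_algorithm(overcommit_rate, low=1.5, high=3.0):
    if overcommit_rate <= low:
        return "co-scheduling"   # enough pCPUs: gang-schedule sibling vCPUs
    elif overcommit_rate <= high:
        return "yield-to-head"   # get the lock holder rescheduled soon
    return "yield-to-tail"       # heavy consolidation: avoid starving others

def reinsert(run_queue, vcpu, algorithm):
    """Reinsert a preempted lock-holder vCPU into the run-queue."""
    if algorithm == "yield-to-head":
        run_queue.appendleft(vcpu)
    else:  # yield-to-tail; co-scheduling is handled by a gang scheduler instead
        run_queue.append(vcpu)
    return run_queue

q = deque(["v1", "v2"])
alg = select_algorithm(2.0)
print(alg, list(reinsert(q, "lock_holder", alg)))  # yield-to-head first in queue
```

The design point the scheme exploits is that all three algorithms share the same primitive operations, so switching among them per over-commitment level is cheap.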

Relevance: 20.00%

Publisher:

Abstract:

Virtualization has brought immense change to modern technology, especially in computer networks, over the last decade. Meanwhile, the enormity of big data has caused massive graphs to grow exponentially in size in recent years, to the point that conventional tools and algorithms struggle to process them. Reducing the size of massive graphs is a major challenge in the current era, and extracting useful information from huge graphs is also problematic. In this paper, we present a concept for designing a virtual graph, vGraph, in a virtual plane above the plane containing the original massive graph, and we propose a novel cumulative similarity measure for vGraph. Using vGraph in lieu of the massive graph saves both space and time. Our proposed algorithm has two main parts. In the first part, virtual nodes are designed from the original nodes based on the cumulative similarity calculated among them. In the second part, virtual edges are designed to link the virtual nodes based on the similarity measure calculated among the original edges of the massive graph. The algorithm is tested on synthetic and real-world datasets, and the results show the efficiency of our proposed approach.
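The two-part construction can be sketched with a simple stand-in similarity. This is an assumption-laden illustration, not the paper's algorithm: plain Jaccard similarity of neighbourhoods stands in for the proposed cumulative similarity measure, and a greedy first-fit grouping stands in for the virtual-node design step.

```python
# Illustrative sketch (Jaccard similarity is an assumed stand-in for the
# paper's cumulative similarity): part 1 groups similar original nodes into
# virtual nodes; part 2 adds a virtual edge wherever an original edge
# crosses two groups.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def build_vgraph(nodes, edges, threshold=0.5):
    neigh = {n: set() for n in nodes}
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
    group = {}           # original node -> virtual node id
    for n in nodes:
        for m in group:  # greedily join the first sufficiently similar group
            if jaccard(neigh[n], neigh[m]) >= threshold:
                group[n] = group[m]
                break
        else:
            group[n] = len(set(group.values()))  # open a new virtual node
    vedges = {tuple(sorted((group[u], group[v])))
              for u, v in edges if group[u] != group[v]}
    return group, vedges
```

On a small bipartite-like graph, nodes with identical neighbourhoods collapse into one virtual node each, so the vGraph keeps the cross-group structure while shrinking both node and edge counts.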

Relevance: 10.00%

Publisher:

Abstract:

While the nascent Cloud Computing paradigm, supported by virtualization, offers compelling new advantages, it lacks proper security and trust mechanisms. Those advantages include on-demand scalability and seemingly infinite resource provisioning on a `pay-as-you-go' basis, for a single information owner (abbreviated as INO hereafter) through to multiple corporate INOs. While outsourcing information to cloud storage controlled by a cloud service provider (abbreviated as CSP hereafter) relieves an information owner of tackling instantaneous oversight and management needs, the significant issue of retaining the owner's control over that information still needs to be solved. This paper delves into the facts of Cloud Computing security issues and aims to explore and establish a secure channel for the INO to communicate with the CSP while maintaining trust and confidentiality. The paper serves this objective by analyzing different protocols and proposing one commensurate with the requirements of security properties such as information confidentiality in the Cloud Computing Environment (CCE). To the best of our knowledge, we are the first to derive a secure protocol by successively eliminating the dangling pitfalls that remain dormant and thereby hamper the confidentiality and integrity of information exchanged between the INO and the CSP. In addition, our derived protocol is conceptually compared with SSL from the perspective of workflow-related activities along a secure trusted path for information confidentiality.

Relevance: 10.00%

Publisher:

Abstract:

The emergence of cloud computing has caused a significant change in how IT infrastructures are provided to research and business organizations. Instead of paying for expensive hardware and incurring excessive maintenance costs, it is now possible to rent the IT infrastructure of other organizations for a minimal fee. While cloud computing itself is new, the elements used to create clouds have been around for some time. Cloud computing systems have been made possible through the use of large-scale clusters, service-oriented architecture (SOA), Web services, and virtualization. While the idea of offering resources via Web services is commonplace in cloud computing, little attention has been paid to the clients themselves, specifically human operators. Although clouds host a variety of resources, which in turn are accessible to a variety of clients, support for human users is minimal.

Relevance: 10.00%

Publisher:

Abstract:

The increasing amount of data collected in the fields of physics and bioinformatics allows researchers to build realistic, and therefore accurate, models and simulations and to gain a deeper understanding of complex systems. This analysis often comes at the cost of greatly increased processing requirements. Cloud computing, which provides on-demand resources, can offset these requirements. While beneficial to researchers, adoption of clouds has been slow due to network and performance uncertainties. We compare the performance of cloud computers to clusters to make clear the advantages and limitations of clouds, focusing on how virtualization and the underlying network affect the performance of High Performance Computing (HPC) applications. The collected results indicate that performance comparable to high performance clusters is achievable on cloud computers, depending on the type of application run.

Relevance: 10.00%

Publisher:

Abstract:

Networking of computing devices has been undergoing rapid evolution and thus continues to be an ever-expanding area of importance. New technologies, protocols, services and usage patterns have contributed to the major research interests in this area of computer science. The current special issue is an effort to bring forward some of the interesting developments being pursued by researchers at present in different parts of the globe. Our objective is to provide the readership with some insight into the latest innovations in computer networking. This Special Issue presents selected papers from the thirteenth conference of the series (ICCIT 2010), held during December 23-25, 2010 at the Ahsanullah University of Science and Technology. The first ICCIT was held in Dhaka, Bangladesh, in 1998. Since then the conference has grown to be one of the largest computer and IT related research conferences in the South Asian region, with participation of academics and researchers from many countries around the world. Starting in 2008, the proceedings of ICCIT have been included in IEEE Xplore. In 2010, a total of 410 full papers were submitted to the conference, of which 136 were accepted after reviews conducted by an international program committee comprising 81 members from 16 countries, an acceptance rate of 33%. From these 136 papers, 14 highly ranked manuscripts were invited for this Special Issue. The authors were advised to enhance their papers significantly and submit them for review for suitability of inclusion in this publication. Of those, eight papers survived the review process and have been selected for inclusion in this Special Issue. The authors of these papers represent academic and/or research institutions from Australia, Bangladesh, Japan, Korea and the USA.
These papers address issues concerning different domains of networks, namely optical fiber communication, wireless and interconnection networks, networking hardware and software, and network mobility. The paper titled “Virtualization in Wireless Sensor Network: Challenges and Opportunities” argues in favor of bringing different heterogeneous sensors under a common virtual framework so that issues like flexibility, diversity, management and security can be handled practically. The authors, Md. Motaharul Islam and Eui-Num Huh, propose an architecture for sensor virtualization. They also present the current status, challenges and opportunities for further research on the topic. The manuscript “Effect of Polarization Mode Dispersion on the BER Performance of Optical CDMA” deals with the impact of polarization mode dispersion on the bit error rate performance of direct-sequence optical code division multiple access. The authors, Md. Jahedul Islam and Md. Rafiqul Islam, present an analytical approach to determining the impact of different performance parameters. They show that bit error rate performance improves significantly more under third-order polarization mode dispersion than under its first- or second-order counterparts. The authors Md. Shohrab Hossain, Mohammed Atiquzzaman and William Ivancic of the paper “Cost and Efficiency Analysis of NEMO Protocol Entities” present an analytical model for estimating the cost incurred by the major mobility entities of a NEMO, defining a new metric for cost calculation in the process. Both the newly developed metric and the analytical model are likely to be useful to network engineers in estimating the resource requirements of the key entities while designing such a network. The article titled “A Highly Flexible LDPC Decoder using Hierarchical Quasi-Cyclic Matrix with Layered Permutation” deals with Low Density Parity Check decoders.
The authors, Vikram Arkalgud Chandrasetty and Syed Mahfuzul Aziz, propose a novel multi-level structured hierarchical matrix approach for generating codes of different lengths flexibly, depending upon the requirements of the application. The manuscript “Analysis of Performance Limitations in Fiber Bragg Grating Based Optical Add-Drop Multiplexer due to Crosstalk” has been contributed by M. Mahiuddin and M. S. Islam. The paper proposes a new method of handling crosstalk with a fiber Bragg grating based optical add-drop multiplexer (OADM). Using an analytical model, the authors show that different parameters improve with their proposed OADM. The paper “High Performance Hierarchical Torus Network Under Adverse Traffic Patterns” addresses issues related to the hierarchical torus network (HTN) under adverse traffic patterns. The authors, M.M. Hafizur Rahman, Yukinori Sato, and Yasushi Inoguchi, observe that the dynamic communication performance of an HTN under adverse traffic conditions has not yet been addressed. They evaluate the performance of the HTN in comparison with other relevant networks, and it is interesting to see that the HTN outperforms these counterparts in terms of throughput and data transfer under adverse traffic. The manuscript titled “Dynamic Communication Performance Enhancement in Hierarchical Torus Network by Selection Algorithm” has also been contributed by M.M. Hafizur Rahman, Yukinori Sato, and Yasushi Inoguchi. The authors introduce three simple adaptive routing algorithms for efficient use of physical links and virtual channels in the hierarchical torus network, and show that their approaches yield better performance for such networks. The final paper, “An Optimization Technique for Improved VoIP Performance over Wireless LAN”, has been contributed by five authors, namely Tamal Chakraborty, Atri Mukhopadhyay, Suman Bhunia, Iti Saha Misra and Salil K. Sanyal. The authors propose an optimization technique for configuring the parameters of the access points.
In addition, they come up with an optimization mechanism to tune the threshold of the active queue management system appropriately. Put together, these mechanisms improve VoIP performance significantly under congestion. Finally, the Guest Editors would like to express their sincere gratitude to the 15 reviewers besides the guest editors themselves (Khalid M. Awan, Mukaddim Pathan, Ben Townsend, Morshed Chowdhury, Iftekhar Ahmad, Gour Karmakar, Shivali Goel, Hairulnizam Mahdin, Abdullah A Yusuf, Kashif Sattar, A.K.M. Azad, F. Rahman, Bahman Javadi, Abdelrahman Desoky, Lenin Mehedy) from several countries (Australia, Bangladesh, Japan, Pakistan, UK and USA) who contributed immensely to this process. They responded to the Guest Editors in the shortest possible time and dedicated their valuable time to ensuring that the Special Issue contains high-quality papers with significant novelty and contributions.

Relevance: 10.00%

Publisher:

Abstract:

As clouds have been deployed widely in various fields, the reliability and availability of clouds have become major concerns of cloud service providers and users. Fault tolerance in clouds has thereby received a great deal of attention in both industry and academia, especially for real-time applications due to their safety-critical nature. A large body of research has been conducted on fault tolerance in distributed systems, in which fault-tolerant scheduling plays a significant role. However, few studies of fault-tolerant scheduling sufficiently address virtualization and elasticity, two key features of clouds. To address this issue, this paper presents a fault-tolerant mechanism which extends the primary-backup model to incorporate the features of clouds. Meanwhile, for the first time, we propose an elastic resource provisioning mechanism in the fault-tolerant context to improve resource utilization. On the basis of the fault-tolerant mechanism and the elastic resource provisioning mechanism, we design a novel fault-tolerant elastic scheduling algorithm for real-time tasks in clouds named FESTAL, aiming at achieving both fault tolerance and high resource utilization. Extensive experiments injecting random synthetic workloads as well as the workload from the latest version of the Google cloud tracelogs are conducted in CloudSim to compare FESTAL with three baseline algorithms, i.e., Non-Migration-FESTAL (NMFESTAL), Non-Overlapping-FESTAL (NOFESTAL), and Elastic First Fit (EFF). The experimental results demonstrate that FESTAL is able to effectively enhance the performance of virtualized clouds.
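The primary-backup model that FESTAL extends can be sketched minimally. This is not the FESTAL algorithm; the greedy least-loaded placement below is an assumed stand-in, and it only captures the core invariant that a task's primary and backup copies must land on different hosts so a single host failure cannot destroy both.

```python
# Illustrative sketch (assumed greedy placement, not FESTAL itself): schedule
# each real-time task as a primary copy plus a backup copy on distinct hosts.

def schedule_primary_backup(tasks, hosts):
    """tasks: {name: load}; hosts: {name: capacity}. Returns {task: (primary, backup)}."""
    placement = {}
    used = {h: 0.0 for h in hosts}
    for task, load in tasks.items():
        # pick the least-loaded host for the primary copy...
        primary = min(hosts, key=lambda h: used[h])
        used[primary] += load
        # ...and the least-loaded *different* host for the backup copy
        backup = min((h for h in hosts if h != primary), key=lambda h: used[h])
        used[backup] += load
        placement[task] = (primary, backup)
    return placement
```

FESTAL's contribution layers elasticity on top of this: virtual machines hosting backups can be provisioned or released on demand, and backup copies may overlap in time with primaries (unlike the NOFESTAL baseline), raising utilization while preserving the fault-tolerance guarantee.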

Relevance: 10.00%

Publisher:

Abstract:

Smartphone technology has become more popular and innovative over the last few years, and technology companies are now introducing wearable devices into the market. As they emerge and converge with technologies such as Cloud, the Internet of Things (IoT) and virtualization, personal sensor devices face immense requirements and are essential to supporting existing networks, e.g. mobile health (mHealth), as well as IoT users. Traditional physiological and biological medical sensors in mHealth provide health data either periodically or on demand. Both situations can cause rapid battery consumption, consume significant bandwidth, and raise privacy issues, because these sensors do not consider or understand sensor status when converged together. The aim of this research is to provide a novel approach and solution for managing and controlling personal sensors that can be used in areas such as health, the military, aged care, IoT and sport. This paper presents an inference system that transfers health data collected by personal sensors to other networks efficiently and in a secure manner, without burdening the sensor devices with additional workload.

Relevance: 10.00%

Publisher:

Abstract:

Cloud computing is an open and promising computing paradigm in which customers can deploy and utilize IT services in a pay-as-you-go fashion while saving huge capital investment in their own IT infrastructure. Due to this openness and virtualization, various malicious service providers may exist in cloud environments, and some of them may record service data from a customer and then collectively deduce the customer's private information without permission. Therefore, from the perspective of cloud customers, it is essential to take technical action to protect their privacy at the client side. Noise obfuscation is an effective approach in this regard: noise service requests can be generated and injected into real customer service requests so that malicious service providers cannot distinguish which requests are real ones, provided that all requests' occurrence probabilities are about the same, and consequently customer privacy can be protected. Existing representative noise generation strategies, however, have not considered possible fluctuations of occurrence probabilities. Such probability fluctuations cannot be concealed by these strategies, posing a serious risk to customer privacy. To address this probability fluctuation privacy risk, we systematically develop a novel time-series pattern based noise generation strategy for privacy protection on the cloud. First, we analyze this privacy risk and present a novel cluster based algorithm to generate time intervals dynamically. Then, based on these time intervals, we investigate the corresponding probability fluctuations and propose a novel time-series pattern based forecasting algorithm. Lastly, based on the forecasting algorithm, we present our noise generation strategy to withstand the probability fluctuation privacy risk. Simulation results demonstrate that our strategy can significantly improve the effectiveness of cloud privacy protection against this risk.
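The basic noise-injection idea (before the paper's time-series refinement) can be sketched as follows. This is a deliberately simple static scheme, an assumption for illustration only: it equalizes request frequencies in one batch, which is exactly the approach that fails once occurrence probabilities fluctuate over time.

```python
import random
from collections import Counter

# Illustrative sketch (a static scheme, NOT the paper's time-series strategy):
# inject noise requests so every service occurs equally often, making real
# requests indistinguishable from noise by frequency alone.

def noise_to_inject(real_counts):
    """How many noise requests per service are needed to level the frequencies."""
    top = max(real_counts.values())
    return {svc: top - n for svc, n in real_counts.items()}

def obfuscated_stream(real_requests, services):
    counts = Counter(real_requests)
    for svc in services:          # services never requested still need cover
        counts.setdefault(svc, 0)
    noise = noise_to_inject(counts)
    stream = list(real_requests)
    for svc, n in noise.items():
        stream.extend([svc] * n)  # noise requests, unmarked to an observer
    random.shuffle(stream)        # remove ordering clues
    return stream
```

The paper's strategy replaces the single batch above with dynamically clustered time intervals plus a forecasting step, so the levelled probabilities hold within each interval even as the customer's real request mix drifts.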

Relevance: 10.00%

Publisher:

Abstract:

Large enterprise software systems make many complex interactions with other services in their environment. Developing and testing for production-like conditions is therefore a very challenging task. Current approaches include emulation of dependent services using either explicit modelling or record-and-replay approaches. Models require deep knowledge of the target services, while record-and-replay is limited in accuracy; both face development and scaling issues. We present a new technique that improves the accuracy of record-and-replay approaches without requiring prior knowledge of the service protocols. The approach uses Multiple Sequence Alignment to derive message prototypes from recorded system interactions, and a scheme to match incoming request messages against these prototypes to generate response messages. We use a modified Needleman-Wunsch algorithm for distance calculation during message matching. Our approach has shown greater than 99% accuracy for four evaluated enterprise system messaging protocols, and has been successfully integrated into the CA Service Virtualization commercial product to complement its existing techniques.
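The distance-based matching step can be illustrated with a plain Needleman-Wunsch alignment. The scoring parameters below are assumptions, not those of the paper's modified algorithm or the CA product: with match 0, mismatch 1, gap 1 the global alignment cost reduces to edit distance, which is enough to show how an incoming request is matched to its closest recorded prototype.

```python
# Illustrative sketch (assumed scoring, not the paper's modified algorithm):
# Needleman-Wunsch global alignment cost between message strings, used to
# select the closest prototype for an incoming request.

def nw_distance(a, b, match=0, mismatch=1, gap=1):
    """Global alignment cost: lower means more similar (0 = identical)."""
    rows, cols = len(a) + 1, len(b) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i * gap                     # delete all of a[:i]
    for j in range(cols):
        d[0][j] = j * gap                     # insert all of b[:j]
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            d[i][j] = min(d[i - 1][j - 1] + sub,  # match / substitution
                          d[i - 1][j] + gap,      # gap in b
                          d[i][j - 1] + gap)      # gap in a
    return d[-1][-1]

def closest_prototype(request, prototypes):
    """Return the prototype with the smallest alignment distance to the request."""
    return min(prototypes, key=lambda p: nw_distance(request, p))
```

In the described system, the selected prototype's recorded response (with variable fields substituted) is what the emulated service sends back; the alignment step is what lets matching tolerate field values never seen during recording.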

Relevance: 10.00%

Publisher:

Abstract:

The telecommunications industry is entering a new era. The increased traffic demands imposed by the huge number of always-on connections require a quantum leap in the field of enabling techniques. Furthermore, subscribers expect ever increasing quality of experience, while network operators and service providers aim for cost-efficient networks. Meeting these requirements calls for a revolutionary change in the telecommunications industry, as shown by the success of virtualization in the IT industry, which is now driving the deployment and expansion of cloud computing. Telecommunications providers are currently rethinking their network architecture: from one consisting of a multitude of black boxes with specialized network hardware and software, to a new architecture consisting of “white box” hardware running a multitude of specialized network software. This network software may be data-plane software providing network functions virtualization (NFV) or control-plane software providing centralized network management, known as software-defined networking (SDN). It is expected that these architectural changes will permeate networks as wide-ranging in size as Internet core networks, metro networks and enterprise networks, and as wide-ranging in functionality as converged packet-optical networks, wireless core networks and wireless radio access networks.

Relevance: 10.00%

Publisher:

Abstract:

Collaborative Anomaly Detection (CAD) is an emerging field of network security in both academia and industry. It has attracted a lot of attention due to the limitations of traditional fortress-style defense models. Even though a number of pioneering studies have been conducted in this area, few of them address the universality issue. This work focuses on two aspects of it. First, a unified collaborative detection framework is developed based on network virtualization technology. Its purpose is to provide a generic approach that can be applied to designing specific schemes for various application scenarios and objectives. Second, a general behavior perception model is proposed for the unified framework based on hidden Markov random fields. Spatial Markovianity is introduced to model the spatial context of distributed network behavior and the stochastic interaction among interconnected nodes. Algorithms are derived for parameter estimation, forward prediction, backward smoothing, and the normality evaluation of both the global network situation and local behavior. Numerical experiments using extensive simulations and several real datasets are presented to validate the proposed solution. Performance-related issues and comparisons with related works are discussed.
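The normality-evaluation idea can be illustrated in a drastically simplified setting. The sketch below uses a 1-D hidden Markov chain, an assumption standing in for the paper's hidden Markov random field (which models spatial, not just temporal, dependence): the forward recursion yields the likelihood of an observation sequence, and a low likelihood under the "normal" model flags anomalous behavior.

```python
import math

# Illustrative sketch (1-D hidden Markov chain, a simplification of the
# paper's hidden Markov random field): forward recursion computing the
# log-likelihood of observed behavior, usable as a normality score.

def forward_loglik(obs, states, start_p, trans_p, emit_p):
    # alpha[s] = P(obs so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return math.log(sum(alpha.values()))

# Hypothetical two-state model: nodes are mostly "normal" and rarely "attack".
states = ["normal", "attack"]
start = {"normal": 0.9, "attack": 0.1}
trans = {"normal": {"normal": 0.95, "attack": 0.05},
         "attack": {"normal": 0.1, "attack": 0.9}}
emit = {"normal": {"low": 0.8, "high": 0.2},
        "attack": {"low": 0.1, "high": 0.9}}
```

In the paper's setting the hidden field is spatial as well, so a node's normality score also conditions on the states of its interconnected neighbors; the temporal recursion above corresponds to the forward-prediction component only.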