11 results for Virtual Machine

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

Because big data places heavy demands on physical resources, storing and processing it in clouds is effective and efficient, as cloud computing allows on-demand resource provisioning. With the increasing requirements for the resources provisioned by cloud platforms, the Quality of Service (QoS) of cloud services for big data management is becoming significantly important. Big data is characteristically sparse, which leads to frequent data accessing and processing and thereby causes a huge amount of energy consumption. Energy cost plays a key role in determining the price of a service and should be treated as a first-class citizen alongside other QoS metrics, because energy-saving services can achieve cheaper service prices and environmentally friendly solutions. However, it is still a challenge to efficiently schedule Virtual Machines (VMs) for service QoS enhancement in an energy-aware manner. In this paper, we propose an energy-aware dynamic VM scheduling method for QoS enhancement in clouds over big data to address this challenge. Specifically, the method consists of two main VM migration phases in which computation tasks are migrated to servers with lower energy consumption or higher performance to reduce service prices and execution time. Extensive experimental evaluation demonstrates the effectiveness and efficiency of our method.
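The two migration phases the abstract describes can be sketched as simple selection rules: move a task to the candidate server with the lowest power draw (to cut energy cost and price), or to the fastest one (to cut execution time). The server model, names, and numbers below are illustrative assumptions, not the paper's actual cost model.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    power_watts: float  # assumed average power draw per task
    speed: float        # assumed relative execution speed (higher = faster)

def pick_energy_target(current: Server, candidates: list[Server]) -> Server:
    """Phase 1 sketch: migrate to the server with the lowest power draw."""
    return min(candidates + [current], key=lambda s: s.power_watts)

def pick_performance_target(current: Server, candidates: list[Server]) -> Server:
    """Phase 2 sketch: migrate to the server with the highest speed."""
    return max(candidates + [current], key=lambda s: s.speed)

servers = [Server("s1", 180.0, 1.0), Server("s2", 120.0, 1.2), Server("s3", 150.0, 1.6)]
cur = servers[0]
print(pick_energy_target(cur, servers[1:]).name)       # lowest-power server
print(pick_performance_target(cur, servers[1:]).name)  # fastest server
```

In a real scheduler these two rules would also be weighed against migration overhead and QoS constraints; here each phase is reduced to a single argmin/argmax over candidate servers.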

Relevance:

70.00%

Publisher:

Abstract:

In current cloud services hosting solutions, various mechanisms have been developed to minimize the possibility of hosting staff breaching security. However, while functions such as replicating and moving machines are legitimate actions in clouds, we show that there are risks in administrators being able to perform them. We describe three threat scenarios related to hosting staff in the cloud architecture and indicate how an appropriate accountability architecture can mitigate these risks, in the sense that the attacks can be detected and the perpetrators identified. We identify the requirements, and the future research and development needed, to protect cloud service environments from these attacks.

Relevance:

70.00%

Publisher:

Abstract:

Big Data technologies are exciting cutting-edge technologies that generate, collect, store and analyse tremendous amounts of data. Like any other IT revolution, Big Data technologies also face big challenges that obstruct their adoption by the wider community, or perhaps impede extracting value from Big Data with the pace and accuracy they promise. In this paper we first offer an alternative view of the "Big Data Cloud", with the main aim of making this complex technology easy to understand for new researchers and of identifying gaps efficiently. In our lab experiment, we successfully implemented cyber-attacks on Apache Hadoop's management interface, "Ambari". Working from the observation that attackers only need one way in, we attacked Apache Hadoop's management interface, successfully shut down all communication between Ambari and Hadoop's ecosystem, and collected performance data from the Ambari Virtual Machine (VM) and the Big Data Cloud hypervisor. We also detected these cyber-attacks with 94.0187% accuracy using modern machine learning algorithms. To our knowledge, no existing research has attempted similar experimentation in detecting cyber-attacks on Hadoop using performance data.
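The detection step, classifying attacks from collected performance data, can be illustrated with a deliberately simple stand-in for the paper's machine learning algorithms: a z-score threshold that flags abnormal samples. The metric (CPU load), the baseline values, and the threshold are all invented for illustration.

```python
import statistics

def detect_anomalies(samples: list[float], threshold: float = 2.5) -> list[int]:
    """Return indices of samples whose z-score exceeds the threshold."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]

# Hypothetical CPU-load trace: steady baseline, then a spike during an attack.
baseline = [0.30, 0.32, 0.29, 0.31, 0.30, 0.28, 0.31, 0.30, 0.29, 0.95]
print(detect_anomalies(baseline))  # index of the anomalous sample
```

A trained classifier, as the abstract uses, would replace this threshold rule, but the pipeline shape (collect performance samples, score each, flag outliers) is the same.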

Relevance:

60.00%

Publisher:

Abstract:

With the explosion of big data, processing large numbers of continuous data streams, i.e., big data stream processing (BDSP), has become a crucial requirement for many scientific and industrial applications in recent years. By offering a pool of computation, communication and storage resources, public clouds, like Amazon's EC2, are undoubtedly the most efficient platforms to meet the ever-growing needs of BDSP. Public cloud service providers usually operate a number of geo-distributed datacenters across the globe, and different datacenter pairs incur different inter-datacenter network costs charged by Internet Service Providers (ISPs). Meanwhile, inter-datacenter traffic in BDSP constitutes a large portion of a cloud provider's traffic demand over the Internet and incurs substantial communication cost, which may even become the dominant operational expenditure factor. As datacenter resources are provided in a virtualized way, the virtual machines (VMs) for stream processing tasks can be freely deployed onto any datacenter, provided that the Service Level Agreement (SLA, e.g., quality-of-information) is obeyed. This raises the opportunity, but also a challenge, to exploit the inter-datacenter network cost diversity to optimize both VM placement and load balancing towards network cost minimization with a guaranteed SLA. In this paper, we first propose a general modeling framework that describes all representative inter-task relationship semantics in BDSP. Based on this framework, we then formulate the communication cost minimization problem for BDSP as a mixed-integer linear programming (MILP) problem and prove it to be NP-hard. We then propose a computation-efficient solution based on the MILP formulation. The high efficiency of our proposal is validated by extensive simulation-based studies.
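The objective the MILP minimizes can be illustrated by pricing the traffic a given VM placement sends between datacenters. The rates, traffic volumes, and names below are invented; a real solver would search over placements rather than evaluate a single one.

```python
def network_cost(placement: dict[str, str],
                 traffic: dict[tuple, float],
                 rate: dict[tuple, float]) -> float:
    """Sum the cost of traffic between communicating VM pairs placed in
    different datacenters; intra-datacenter traffic is assumed free."""
    total = 0.0
    for (vm_a, vm_b), gb in traffic.items():
        dc_a, dc_b = placement[vm_a], placement[vm_b]
        if dc_a != dc_b:
            total += gb * rate[tuple(sorted((dc_a, dc_b)))]
    return total

# Hypothetical stream-processing tasks and ISP rates ($/GB per DC pair).
traffic = {("vm1", "vm2"): 10.0, ("vm2", "vm3"): 5.0}
rate = {("dc1", "dc2"): 0.02, ("dc1", "dc3"): 0.05, ("dc2", "dc3"): 0.03}
print(network_cost({"vm1": "dc1", "vm2": "dc1", "vm3": "dc2"}, traffic, rate))
```

The MILP additionally encodes SLA and load-balancing constraints; this sketch only shows why co-locating heavily communicating VMs drives the objective down.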

Relevance:

60.00%

Publisher:

Abstract:

Multicore processors are widely used in today's computer systems, and multicore virtualization technology provides an elastic solution to utilize multicore systems more efficiently. However, the Lock Holder Preemption (LHP) problem in virtualized multicore systems wastes significant CPU cycles, which hurts virtual machine (VM) performance and increases response latency. The more VMs the system consolidates, the worse the LHP problem becomes. In this paper, we propose an efficient consolidation-aware vCPU scheduling (CVS) scheme on the multicore virtualization platform. Based on the vCPU over-commitment rate, the CVS scheme adaptively selects one of three vCPU scheduling algorithms: co-scheduling, yield-to-head, or yield-to-tail. This is possible because the actions of vCPU scheduling are split into many single steps, such as scheduling vCPUs simultaneously or inserting one vCPU into the run-queue at the head or tail. The CVS scheme can effectively improve VM performance in low, middle, and high VM consolidation scenarios. Using real-life parallel benchmarks, our experimental results show that the proposed CVS scheme improves overall system performance while the optimization overhead remains low.
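The adaptive selection the abstract describes can be sketched as a rule that maps the over-commitment rate (vCPUs per physical core) to one of the three algorithms. The cutoff values here are invented for illustration; the paper derives its own switching conditions.

```python
def choose_policy(vcpus: int, pcpus: int,
                  low: float = 1.5, high: float = 3.0) -> str:
    """Pick a vCPU scheduling algorithm from the over-commitment rate."""
    rate = vcpus / pcpus
    if rate <= low:
        return "co-scheduling"   # enough cores: run sibling vCPUs together
    if rate <= high:
        return "yield-to-head"   # medium load: requeue preempted lock holder first
    return "yield-to-tail"       # heavy load: requeue at the tail to stay fair

print(choose_policy(8, 8))    # low consolidation
print(choose_policy(16, 8))   # middle consolidation
print(choose_policy(32, 8))   # high consolidation
```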

Relevance:

60.00%

Publisher:

Abstract:

QoS plays a key role in evaluating a service or a service composition plan across clouds and data centers. Currently, the energy cost of a service's execution is not covered by the QoS framework, and a service's price is often fixed during its execution. However, energy consumption contributes greatly to determining the price of a cloud service. As a result, it is not reasonable for the price of a cloud service to be calculated with a fixed energy consumption value if part of the service's energy consumption could be saved during its execution. Taking advantage of a dynamic energy-aware optimization technique, this paper proposes a QoS-enhanced method for service computing through virtual machine (VM) scheduling. Technically, two typical QoS metrics, the price and the execution time, are taken into consideration in our method. Moreover, our method consists of two dynamic optimization phases. The first phase aims to dynamically give a user a discounted price by transparently migrating his or her task execution from a VM located at a server with high energy consumption to one at a server with low energy consumption. The second phase aims to shorten a task's execution time by transparently migrating the task execution from one VM to another located at a server with higher performance. Experimental evaluation of large-scale service computing across clouds demonstrates the validity of our method.
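The pricing intuition behind the first phase can be sketched with a simple model: if part of a service's energy can be saved by migration, the price should fall accordingly rather than being computed from a fixed energy value. The linear price model and all rates below are assumptions, not the paper's formulation.

```python
def service_price(base_price: float, energy_kwh: float,
                  saved_kwh: float, energy_rate: float = 0.15) -> float:
    """Assumed model: price = base + energy actually consumed * rate."""
    consumed = max(energy_kwh - saved_kwh, 0.0)
    return base_price + consumed * energy_rate

fixed = service_price(2.0, 10.0, 0.0)    # fixed energy value, no savings
dynamic = service_price(2.0, 10.0, 4.0)  # 4 kWh saved by migration
print(fixed, dynamic)                    # dynamic < fixed: the user's discount
```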

Relevance:

60.00%

Publisher:

Abstract:

A cloud workflow system is a type of platform service which facilitates the automation of distributed applications based on the novel cloud infrastructure. One of the most important aspects which differentiates a cloud workflow system from its counterparts is the market-oriented business model. This is a significant innovation which brings many challenges to conventional workflow scheduling strategies. To investigate this issue, this paper proposes a market-oriented hierarchical scheduling strategy for cloud workflow systems. Specifically, the service-level scheduling deals with the Task-to-Service assignment, where tasks of individual workflow instances are mapped to cloud services in the global cloud markets based on their functional and non-functional QoS requirements; the task-level scheduling deals with the optimisation of the Task-to-VM (virtual machine) assignment in local cloud data centres, where the overall running cost of cloud workflow systems is minimised given the satisfaction of QoS constraints for individual tasks. Based on our hierarchical scheduling strategy, a package-based random scheduling algorithm is presented as the candidate service-level scheduling algorithm, and three representative metaheuristic-based scheduling algorithms, genetic algorithm (GA), ant colony optimisation (ACO), and particle swarm optimisation (PSO), are adapted, implemented and analysed as the candidate task-level scheduling algorithms. The hierarchical scheduling strategy has been implemented in our SwinDeW-C cloud workflow system and demonstrates satisfactory performance. Meanwhile, the experimental results show that the overall performance of the ACO-based scheduling algorithm is better than that of the others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost, and the CPU time.
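One elementary step of the task-level problem, choosing a VM for a single task so that running cost is minimised while its QoS constraint holds, can be sketched greedily. The VM specs, prices, and the deadline constraint below are invented; the paper searches the full assignment space with GA, ACO, and PSO rather than task by task.

```python
from typing import Optional

def assign_task(runtime_units: float, deadline: float,
                vms: list) -> Optional[str]:
    """Return the cheapest VM whose speed still meets the task's deadline."""
    feasible = [v for v in vms if runtime_units / v["speed"] <= deadline]
    if not feasible:
        return None  # no VM satisfies the QoS constraint
    return min(feasible, key=lambda v: v["price"])["name"]

vms = [
    {"name": "small",  "speed": 1.0, "price": 0.05},
    {"name": "medium", "speed": 2.0, "price": 0.10},
    {"name": "large",  "speed": 4.0, "price": 0.40},
]
print(assign_task(10.0, 6.0, vms))  # tight deadline rules out the small VM
```

The metaheuristics matter because per-task greedy choices like this one interact: VMs are shared across tasks, so a globally cheaper assignment may require a locally more expensive one.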

Relevance:

30.00%

Publisher:

Abstract:

Haptic human-machine interfaces and interaction techniques have been shown to offer advantages over conventional approaches. This work introduces the 3D virtual haptic cone with the aim of improving human remote control of a vehicle's motion. The 3D cone adds a third dimension to the haptic control surface compared with existing approaches. It improves upon existing methods by providing the human operator with an intuitive method for issuing vehicle motion commands whilst simultaneously receiving real-time haptic information from the remote system. The presented approach offers potential across many applications; as a case study, this work considers the approach in the context of mobile robot motion control. The performance of the approach in providing the operator with improved motion controllability is evaluated, and the resulting performance improvement is quantified.

Relevance:

30.00%

Publisher:

Abstract:

Haptic technology provides the ability for a system to recreate the sense of touch for a human operator, and as such offers wide-reaching advantages. The ability to interact with the human's tactual modality allows haptic human-machine interaction to replace or augment existing mediums such as visual and audible information. A distinct advantage of haptic human-machine interaction is its intrinsically bilateral nature, where information can be communicated in both directions simultaneously. This paper investigates the bilateral nature of the haptic interface in controlling the motion of a remote (or virtual) vehicle and presents the ability to provide an additional dimension of haptic information to the user over existing approaches [1-4]. The 3D virtual haptic cone offers the ability not only to provide the user with relevant haptic augmentation pertaining to the task at hand, as existing approaches do, but also to simultaneously provide an intuitive indication of the velocities currently being commanded.
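The geometric idea can be sketched as follows: the handle's lateral displacement sets the commanded velocities, while a third (haptic) dimension, the height of a cone surface above that point, grows with the commanded magnitude and can be rendered as resistance, so the operator feels how aggressive a command is. The mapping and the cone slope below are assumptions for illustration, not the paper's control law.

```python
import math

def command_from_displacement(dx: float, dy: float,
                              max_speed: float = 1.0, max_turn: float = 1.0):
    """Map 2D handle displacement (each axis in [-1, 1]) to
    (linear, angular) velocity commands."""
    return (max(-1.0, min(1.0, dy)) * max_speed,
            max(-1.0, min(1.0, dx)) * max_turn)

def cone_height(dx: float, dy: float, slope: float = 0.5) -> float:
    """Assumed third dimension: height grows with commanded magnitude."""
    return slope * math.hypot(dx, dy)

v, w = command_from_displacement(0.0, 0.8)
print(v, w)                   # forward at 0.8 of max speed, no turn
print(cone_height(0.0, 0.8))  # haptic resistance proportional to speed
```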

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a Computational Virtual Reality Environment for Anesthesia (CVREA) is proposed. Virtual reality, data mining, and machine learning techniques will be explored to develop (1) an immersive and interactive training platform for anaesthetists, which can greatly improve their training and learning performance; and (2) a knowledge-learning environment which collects clinical data with greater richness, processes data with more efficacy, and facilitates knowledge discovery in anaesthesiology.

Relevance:

30.00%

Publisher:

Abstract:

Robots are ever more common in a variety of workplaces, providing an array of benefits such as alternative solutions to traditional human labor. While developing fully autonomous robots is the ultimate goal in many robotic applications, the reality is that many situations still exist where robots require some level of teleoperation in order to achieve assigned goals, especially when deployed in non-deterministic environments. For instance, teleoperation is commonly used in areas such as search and rescue, bomb disposal, and exploration of inaccessible or harsh terrain. This is due to a range of factors, such as robots' limited ability to quickly and reliably navigate unknown environments or to provide high-level decision making, especially in time-critical tasks. To provide an adequate solution for such situations, human-in-the-loop control is required. When developing human-in-the-loop control it is important to take advantage of the complementary skill-sets that humans and robots possess. For example, robots can perform rapid calculations, provide accurate measurements through hardware such as sensors, and store large amounts of data, while humans provide experience, intuition, risk management, and complex decision-making capabilities. Shared autonomy is the concept of building robotic systems that take advantage of these complementary skill-sets to provide a robust and efficient robotic solution.

While the requirement for human-in-the-loop control exists, Human Machine Interaction (HMI) remains an important research topic, especially the area of User Interface (UI) design. In order to provide operators with an effective teleoperation system, it is important that the interface is intuitive and dynamic while also achieving a high level of immersion. Recent advancements in virtual and augmented reality hardware are giving rise to innovative HMI systems. Interactive hardware such as the Microsoft Kinect, Leap Motion, Oculus Rift, Samsung Gear VR, and even CAVE Automatic Virtual Environments [1], along with software such as the experimental web browser JanusVR [2], are providing vast improvements over traditional user interface designs. This, combined with the introduction of standardized robot frameworks such as ROS and Webots [3] that now support a large number of different robots, provides an opportunity to develop a universal UI for teleoperation control that improves operator efficiency while reducing teleoperation training.

This research introduces the concept of a dynamic virtual workspace for teleoperation of heterogeneous robots in non-deterministic environments that require human-in-the-loop control. The system first identifies the connected robots through the use of kinematic information, then determines their network capabilities such as latency and bandwidth. Given the robot type and network capabilities, the system can then provide the operator with the available teleoperation modes, such as pick-and-place control or waypoint navigation, while also allowing the operator to manipulate the virtual workspace layout to present information from onboard cameras or sensors.
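The final step described above, offering only the teleoperation modes a link can support, can be sketched as a rule over measured latency and bandwidth. The thresholds and mode names below are illustrative assumptions, not values from the research.

```python
def available_modes(latency_ms: float, bandwidth_mbps: float) -> list:
    """Return teleoperation modes the measured link can support."""
    modes = ["waypoint navigation"]           # tolerant of any link quality
    if latency_ms <= 250:
        modes.append("pick and place")        # needs moderately fresh feedback
    if latency_ms <= 80 and bandwidth_mbps >= 10:
        modes.append("direct drive with live video")
    return modes

print(available_modes(300, 50))  # high latency: waypoints only
print(available_modes(40, 50))   # good link: all modes offered
```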