33 results for cloud computing datacenter performance QoS
in Aston University Research Archive
Abstract:
Specification of the non-functional requirements of applications and determining the required resources for their execution are activities that demand a great deal of technical knowledge, frequently resulting in an inefficient use of resources. Cloud computing is an alternative for the provisioning of resources, which can be done using the provider's own infrastructure, the infrastructure of one or more public clouds, or a combination of both. It enables more flexible/elastic use of resources, but does not solve the specification problem. In this paper we present an approach that uses models at runtime to facilitate the specification of non-functional requirements and resources, aiming to provide dynamic support for application execution in cloud computing environments with shared resources. © 2013 IEEE.
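The abstract above does not show concrete artefacts, but a minimal Python sketch can illustrate what a runtime model of non-functional requirements mapped to a resource request might look like. The class names, fields and mapping rule are illustrative assumptions only, not the metamodel proposed in the paper.

```python
# Hypothetical sketch: a minimal "model at runtime" capturing an application's
# non-functional requirements and deriving a resource request from them.
# All names and the mapping rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class NonFunctionalRequirements:
    max_response_time_ms: float     # e.g. 200 ms
    min_availability: float         # e.g. 0.999
    expected_requests_per_sec: float

@dataclass
class ResourceRequest:
    vcpus: int
    memory_gb: int
    replicas: int

def derive_resources(nfr: NonFunctionalRequirements) -> ResourceRequest:
    """Translate declarative NFRs into a concrete resource request.

    Naive rule of thumb: one vCPU per 100 req/s, plus an extra replica
    when higher availability is demanded (purely illustrative numbers).
    """
    vcpus = max(1, round(nfr.expected_requests_per_sec / 100))
    replicas = 3 if nfr.min_availability >= 0.999 else 2
    return ResourceRequest(vcpus=vcpus, memory_gb=2 * vcpus, replicas=replicas)

if __name__ == "__main__":
    nfr = NonFunctionalRequirements(200.0, 0.999, 450)
    print(derive_resources(nfr))  # ResourceRequest(vcpus=4, memory_gb=8, replicas=3)
```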
Abstract:
The enormous potential of cloud computing for improved and cost-effective service has generated unprecedented interest in its adoption. However, a potential cloud user faces numerous risks regarding service requirements, the cost implications of failure, and uncertainty about cloud providers' ability to meet service level agreements. These risks hinder cloud adoption. We extend the work on goal-oriented requirements engineering (GORE) and obstacle analysis to inform the adoption process. We argue that prioritising obstacles and resolving them is core to mitigating risks in the adoption process. We propose a novel systematic method for prioritising obstacles and their resolution tactics using the Analytical Hierarchy Process (AHP). We provide an example to demonstrate the applicability and effectiveness of the approach. To assess the AHP choice of resolution tactics, we complement the method with stability and sensitivity analyses. Copyright 2014 ACM.
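As a concrete illustration of the prioritisation step, the snippet below computes AHP priority weights and a consistency ratio from a pairwise comparison matrix using the standard principal eigenvector method. The obstacles and judgement values are made up for illustration; they are not taken from the paper.

```python
# Illustrative AHP prioritisation: priority weights and consistency ratio
# from a pairwise comparison matrix (principal eigenvector method).
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> tuple[np.ndarray, float]:
    """Return priority weights and the consistency ratio for a pairwise matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.24)  # Saaty's random index
    return w, ci / ri

# Three hypothetical adoption obstacles compared on Saaty's 1-9 scale.
m = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(m)
print(weights.round(3), round(cr, 3))  # highest-weight obstacle is resolved first
```

A consistency ratio below roughly 0.1 is conventionally taken to mean the pairwise judgements are acceptably consistent.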
Abstract:
Work on human self-awareness is the basis for a framework to develop computational systems that can adaptively manage complex, dynamic tradeoffs at runtime. An architectural case study in cloud computing illustrates the framework's potential benefits.
Abstract:
Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified and expressed in a probabilistic variant of temporal logic and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
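The toy sketch below conveys the underlying idea of quantitative cost verification: resource usage is modelled as a chain over usage "patterns" and the expected cumulative cost over a billing horizon is computed. The states, transition probabilities and prices are invented; the paper's actual approach expresses usage patterns as MDPs and analyses them against probabilistic temporal logic queries with quantitative verification tools rather than this hand-rolled calculation.

```python
# Toy expected-cost computation over a Markov chain of usage patterns.
# Numbers are illustrative assumptions, not taken from the paper.
import numpy as np

P = np.array([[0.8, 0.2],      # transition matrix: low  -> {low, high}
              [0.4, 0.6]])     #                    high -> {low, high}
cost = np.array([0.10, 0.45])  # $/hour charged in each usage state
horizon_hours = 24 * 30        # one billing month

def expected_cost(P, cost, start, steps):
    """Expected cumulative cost over `steps` hourly transitions from `start`."""
    dist = np.zeros(len(cost)); dist[start] = 1.0
    total = 0.0
    for _ in range(steps):
        total += dist @ cost   # expected cost incurred this hour
        dist = dist @ P        # advance the usage distribution
    return total

print(f"expected monthly cost: ${expected_cost(P, cost, 0, horizon_hours):.2f}")
```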
Abstract:
To benefit from the advantages that Cloud Computing brings to the IT industry, management policies must be implemented as part of the operation of the Cloud. For example, policies can be used for energy management, to reduce the cost of running the IT system, or for security and for handling the privacy concerns of users. As cloud platforms are large, manual enforcement of policies is not scalable; hence, autonomic approaches to management policies have recently received considerable attention. These approaches allow the specification of rules that are executed via rule engines. The process of rule creation starts with the interpretation of the policies drafted by high-level managers. Technical IT staff then translate these policies into operational activities in order to implement them. Such a process can start from a textual declarative description and, after numerous steps, terminate in a set of rules to be executed on a rule engine. To simplify these steps and to bridge the considerable gap between declarative policies and executable rules, we propose a domain-specific language called CloudMPL. We also design a method for the automated transformation of the rules captured in CloudMPL into the popular rule engine Drools. As policies change over time, code generation will reduce the time required to implement them. In addition, using a declarative language for writing the specifications is expected to make the authoring of rules easier. We demonstrate the use of the CloudMPL language on a running example extracted from an energy consumption management case study.
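To give a flavour of the kind of code generation described, the sketch below turns a small declarative policy object into a Drools-style rule string. The policy fields, the fact model (Host, idleMinutes) and the action name are hypothetical; CloudMPL's actual syntax and the paper's transformation templates are not reproduced here.

```python
# Hypothetical declarative-policy-to-Drools-rule generation.
# Field names and the generated rule body are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EnergyPolicy:
    name: str
    max_idle_minutes: int   # consolidate hosts idle longer than this
    action: str             # e.g. "migrateAndPowerOff"

def to_drools(policy: EnergyPolicy) -> str:
    """Generate a Drools rule from a declarative energy-management policy."""
    return f"""rule "{policy.name}"
when
    $h : Host( idleMinutes > {policy.max_idle_minutes} )
then
    datacenter.{policy.action}( $h );
end"""

print(to_drools(EnergyPolicy("ConsolidateIdleHosts", 30, "migrateAndPowerOff")))
```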
Abstract:
Technological advancements enable new sourcing models in software development such as cloud computing, software-as-a-service, and crowdsourcing. While the first two are perceived as a re-emergence of older models (e.g., ASP), crowdsourcing is a new model that creates an opportunity for a global workforce to compete with established service providers. Organizations engaging in crowdsourcing need to develop the capabilities to successfully utilize this sourcing model in delivering services to their clients. To explore these capabilities we collected qualitative data from focus groups with crowdsourcing leaders at a large technology organization. New capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IS outsourcing as well as offers important insights to organizations that are experimenting with, or considering, crowdsourcing.
Abstract:
The world is connected by a core network of long-haul optical communication systems that link countries and continents, enabling long-distance phone calls, data-center communications, and the Internet. The demands on information rates have been constantly driven up by applications such as online gaming, high-definition video, and cloud computing. All over the world, end-user connection speeds are being increased by replacing conventional digital subscriber line (DSL) and asymmetric DSL (ADSL) with fiber to the home. Clearly, the capacity of the core network must also increase proportionally. © 1991-2012 IEEE.
Abstract:
Volunteered Service Composition (VSC) refers to the process of composing volunteered services and resources. These services are typically published to a pool of voluntary resources. The composition aims at satisfying objectives such as utilizing storage and eliminating waste, sharing space and optimizing for energy, or reducing computational cost. In cases where a single volunteered service does not satisfy a request, VSC is required. In this paper, we contribute three approaches for composing volunteered services: exhaustive, naïve, and utility-based search. The proposed utility-based approach, for instance, measures the utility that each volunteered service can provide to each request and systematically selects the one with the highest utility. We found that the utility-based approach tends to be more effective and efficient when selecting services, while minimizing resource waste, compared to the other two approaches.
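A minimal sketch of the utility-based selection idea follows, assuming a toy utility function (a weighted combination of spare capacity and energy cost) that is not the paper's actual formulation.

```python
# Utility-based selection sketch: score every volunteered service against a
# request and pick the highest-scoring one. The utility function is invented.
from dataclasses import dataclass

@dataclass
class VolunteeredService:
    name: str
    free_storage_gb: float
    energy_cost: float      # relative energy cost of using this volunteer

@dataclass
class Request:
    storage_gb: float

def utility(svc: VolunteeredService, req: Request) -> float:
    if svc.free_storage_gb < req.storage_gb:
        return float("-inf")                      # cannot satisfy the request alone
    waste = svc.free_storage_gb - req.storage_gb
    return -0.7 * waste - 0.3 * svc.energy_cost   # prefer tight fit and low energy

def select(pool, req):
    return max(pool, key=lambda s: utility(s, req))

pool = [VolunteeredService("a", 50, 2.0),
        VolunteeredService("b", 12, 1.0),
        VolunteeredService("c", 80, 0.5)]
print(select(pool, Request(storage_gb=10)).name)   # -> "b" (least waste)
```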
Abstract:
Continuous progress in optical communication technology and the corresponding increase in data rates in core fiber communication systems are stimulated by the ever-growing capacity demand due to constantly emerging new bandwidth-hungry services such as cloud computing and ultra-high-definition video streams. This demand is pushing the required capacity of optical communication lines close to the theoretical limit of a standard single-mode fiber, which is imposed by Kerr nonlinearity [1–4]. In recent years, there have been extensive efforts to mitigate the detrimental impact of fiber nonlinearity on signal transmission through various compensation techniques. However, there are still many challenges in applying these methods, because the majority of technologies utilized in the inherently nonlinear fiber communication systems were originally developed for linear communication channels. Thereby, the application of "linear techniques" in fiber communication systems is inevitably limited by the nonlinear properties of the fiber medium. The quest for the optimal design of nonlinear transmission channels, the development of nonlinear communication techniques, and the usage of nonlinearity in a "constructive" way has occupied researchers for quite a long time.
Abstract:
Dedicated short range communications (DSRC) has been regarded as one of the most promising technologies for providing robust communications for large-scale vehicle networks. It is designed to support both road safety and commercial applications. Road safety applications require reliable and timely wireless communications. However, as the medium access control (MAC) layer of DSRC is based on the IEEE 802.11 distributed coordination function (DCF), it is well known that a random-channel-access-based MAC cannot provide guaranteed quality of service (QoS). It is very important to understand the quantitative performance of DSRC in order to make better decisions on its adoption, control, adaptation, and improvement. In this paper, we propose an analytic model to evaluate DSRC-based inter-vehicle communication. We investigate the impact of the channel access parameters associated with the different services, including the arbitration inter-frame space (AIFS) and the contention window (CW). Based on the proposed model, we analyze the successful message delivery ratio and the channel service delay for broadcast messages. The proposed analytical model provides a convenient tool for evaluating inter-vehicle safety applications and analyzing the suitability of DSRC for road safety applications.
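For intuition only, the snippet below computes a heavily simplified broadcast delivery ratio for a fixed contention window, assuming each saturated station transmits in a slot with probability tau = 2/(CW + 1) and a broadcast succeeds when no other station transmits in the same slot. It ignores AIFS differentiation, backoff details, hidden terminals and channel errors, all of which the paper's analytic model addresses.

```python
# Back-of-the-envelope broadcast delivery ratio under a simplified slotted model.
# Not the paper's model; parameters and assumptions are illustrative.
def broadcast_delivery_ratio(n_vehicles: int, cw: int) -> float:
    tau = 2.0 / (cw + 1)                       # per-slot transmit probability
    return (1.0 - tau) ** (n_vehicles - 1)     # success if nobody else transmits

for n in (10, 50, 100):
    print(n, round(broadcast_delivery_ratio(n, cw=15), 3))
```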
Abstract:
In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics: Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested on both a local computer cluster and virtual machines in the cloud. By varying the number of computers and the amount of memory, we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run cross-evaluations of generic and graph-based analytic algorithms over graph processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability with an increase in computing resources. While GraphLab largely outperforms Spark for graph algorithms, its running time is close to Spark's for non-graph algorithms. Additionally, the running time of Spark for graph algorithms on cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
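As an indication of how such running-time measurements can be taken on the Spark side, the sketch below times the classic RDD-based PageRank on a toy edge list, assuming a local PySpark installation. It is not the paper's benchmark suite; the datasets, cluster configurations and GraphLab counterpart are not reproduced.

```python
# Hypothetical timing harness for one Spark workload (RDD-based PageRank).
# Assumes PySpark is installed; graph and iteration count are illustrative.
import time
from operator import add
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("pagerank-bench").getOrCreate()
sc = spark.sparkContext

edges = sc.parallelize([("a", "b"), ("a", "c"), ("b", "c"), ("c", "a"), ("d", "c")])
links = edges.distinct().groupByKey().cache()   # node -> iterable of neighbours
ranks = links.mapValues(lambda _: 1.0)

start = time.perf_counter()
for _ in range(10):                              # 10 PageRank iterations
    contribs = links.join(ranks).flatMap(
        lambda kv: [(dst, kv[1][1] / len(kv[1][0])) for dst in kv[1][0]])
    ranks = contribs.reduceByKey(add).mapValues(lambda r: 0.15 + 0.85 * r)
result = ranks.collect()                         # force evaluation before stopping the timer
elapsed = time.perf_counter() - start

print(f"PageRank over {links.count()} source nodes took {elapsed:.2f}s: {result}")
spark.stop()
```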
Abstract:
Ad hoc wireless sensor networks (WSNs) are formed from self-organising configurations of distributed, energy-constrained, autonomous sensor nodes. The service lifetime of such sensor nodes depends on the power supply and the energy consumption, which is typically dominated by the communication subsystem. One of the key challenges in unlocking the potential of such data-gathering sensor networks is conserving energy so as to maximise their post-deployment active lifetime. This thesis describes the research carried out on the continued development of the novel energy-efficient Optimised grids algorithm, which increases WSN lifetime and improves QoS parameters, yielding higher throughput and lower latency and jitter for the next generation of WSNs. Based on the relationship between range and traffic, the novel Optimised grids algorithm provides a robust, traffic-dependent, energy-efficient grid size that minimises the cluster head energy consumption in each grid and balances energy use throughout the network. Efficient spatial reusability allows the novel Optimised grids algorithm to improve network QoS parameters. The most important advantage of this model is that it can be applied to all one- and two-dimensional traffic scenarios where the traffic load may fluctuate due to sensor activities. During traffic fluctuations the novel Optimised grids algorithm can be used to re-optimise the wireless sensor network to bring further benefits in energy reduction and improvement in QoS parameters. As the idle energy becomes dominant at lower traffic loads, the new Sleep Optimised grids model incorporates the sleep energy and idle energy duty cycles, which can be implemented to achieve further network lifetime gains in all wireless sensor network models. Another key advantage of the novel Optimised grids algorithm is that it can be implemented with existing energy-saving protocols like GAF, LEACH, SMAC and TMAC to further enhance network lifetimes and improve QoS parameters. The novel Optimised grids algorithm does not interfere with these protocols, but creates an overlay to optimise the grid sizes and hence the transmission range of the wireless sensor nodes.
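The following rough sketch illustrates only the general trade-off the grid-sizing idea targets, using the textbook first-order radio model: shorter hops cost less per transmission but force a cluster head to relay more packets. The constants and the traffic expression are generic assumptions, not the thesis's actual derivation of the Optimised grids algorithm.

```python
# Generic first-order radio model illustration of the grid-size trade-off.
# Constants (50 nJ/bit electronics, 100 pJ/bit/m^2 amplifier) are textbook
# values; the traffic-vs-grid-size relationship below is an assumption.
E_ELEC = 50e-9        # J/bit, transmitter/receiver electronics energy
EPS_AMP = 100e-12     # J/bit/m^2, amplifier energy (d^2 path loss)
PACKET_BITS = 2000

def cluster_head_energy(grid_side_m: float, packets_relayed: int) -> float:
    """Energy (J) for a cluster head to receive and forward packets one grid hop."""
    rx = E_ELEC * PACKET_BITS
    tx = E_ELEC * PACKET_BITS + EPS_AMP * PACKET_BITS * grid_side_m ** 2
    return packets_relayed * (rx + tx)

for d in (20, 40, 80):
    packets = int(8000 / d)   # illustrative: smaller grids relay more traffic
    print(f"grid side {d} m: {cluster_head_energy(d, packets):.4f} J")
```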
Abstract:
Computing circuits composed of noisy logical gates and their ability to represent arbitrary Boolean functions with a given level of error are investigated within a statistical mechanics setting. Existing bounds on their performance are straightforwardly retrieved, generalized, and identified as the corresponding typical-case phase transitions. Results on error rates, function depth, and sensitivity, and their dependence on the gate-type and noise model used are also obtained.
Abstract:
The main objective of the work presented in this thesis is to investigate the two sides of the flute, the face and the heel, of a twist drill. The flute face was designed to yield straight diametral lips which could be extended to eliminate the chisel edge, so that a single cutting edge is obtained. Since drill rigidity and space for chip conveyance have to be a compromise, a theoretical expression is deduced which enables the optimum chip disposal capacity to be described in terms of drill parameters. This expression is used to describe the flute heel side. Another main objective is to study the effect on drill performance of changing the conventional drill flute. Drills were manufactured according to the new flute design, and tests were run in order to compare the performance of a conventional-flute drill with the non-conventional design put forward. The results showed that a 50% reduction in thrust force and approximately an 18% reduction in torque were attained for the new design. The flank wear, measured at the outer corner, was found to be less for the new design than for the conventional drill in the majority of cases. Hole quality, roundness, size and roughness were also considered as further aspects of drill performance. Improvement in hole quality is shown to arise under certain cutting conditions; accordingly, it might be possible to use a hole produced in one pass of the new drill where previously a drilled and reamed hole would have been required. A subsidiary objective is to design the form milling cutter that should be employed for milling the foregoing special flute from the drill blank, allowing for the interference effect. A mathematical analysis in conjunction with computing techniques is used. To control the grinding parameters, a prototype drill grinder was designed and built upon the framework of an existing Cincinnati cutter grinder. The design and build of the new grinder are based on a computer-aided drill point geometry analysis. In addition to the conical grinding concept, the new grinder is also used to produce a spherical point, utilizing a computer-aided drill point geometry analysis.
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
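As a toy stand-in for the adaptation loop described above, the sketch below expresses a reliability requirement as a PCTL-style property, "analyses" each candidate configuration with a trivial closed-form model (rather than the probabilistic model checking the framework actually relies on), and picks the cheapest compliant configuration. The service names, failure probabilities and costs are invented for illustration.

```python
# Toy configuration-selection loop in the spirit of the described framework.
# A real implementation would query a probabilistic model checker; here a
# closed-form retry model stands in for that analysis.
from dataclasses import dataclass

REQUIREMENT = "P>=0.98 [ F completed ]"   # PCTL-style: completion probability >= 0.98
TARGET = 0.98

@dataclass
class Config:
    service: str
    failure_prob: float   # per-invocation failure probability (assumed independent)
    retries: int
    cost: float

def prob_completed(c: Config) -> float:
    """P(at least one of 1 + retries invocations succeeds)."""
    return 1.0 - c.failure_prob ** (1 + c.retries)

candidates = [Config("alarm-svc-A", 0.05, 0, 1.0),
              Config("alarm-svc-A", 0.05, 1, 1.8),
              Config("alarm-svc-B", 0.12, 2, 1.5)]

compliant = [c for c in candidates if prob_completed(c) >= TARGET]
best = min(compliant, key=lambda c: c.cost)   # cheapest configuration meeting the requirement
print(REQUIREMENT, "->", best)
```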