13 results for sistema distribuito data-grid cloud computing CERN LHC Hazelcast Elasticsearch

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Specification of the non-functional requirements of applications and determining the required resources for their execution are activities that demand a great deal of technical knowledge, frequently resulting in an inefficient use of resources. Cloud computing is an alternative for the provisioning of resources, which can be done using either the provider's own infrastructure or the infrastructure of one or more public clouds, or a combination of both. It enables more flexible/elastic use of resources, but does not solve the specification problem. In this paper we present an approach that uses models at runtime to facilitate the specification of non-functional requirements and resources, aiming to provide dynamic support for application execution in cloud computing environments with shared resources. © 2013 IEEE.
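
As a rough illustration of the models-at-runtime idea, the sketch below keeps a runtime model of non-functional requirements and reconciles it against the currently provisioned resources; all names and thresholds are hypothetical, not taken from the paper.

```python
# Minimal sketch of a models@runtime idea (names hypothetical): a runtime
# model holds an application's non-functional requirements, and a reconciler
# compares them against the resources currently allocated in a shared cloud.
from dataclasses import dataclass

@dataclass
class NfrModel:              # runtime model of non-functional requirements
    min_cpus: int            # minimum vCPUs required
    min_memory_gb: float     # minimum memory in GB
    max_latency_ms: float    # acceptable response latency

@dataclass
class Allocation:            # resources currently provisioned
    cpus: int
    memory_gb: float
    observed_latency_ms: float

def reconcile(model: NfrModel, alloc: Allocation) -> list[str]:
    """Return adaptation actions needed to keep the allocation
    consistent with the runtime NFR model."""
    actions = []
    if alloc.cpus < model.min_cpus:
        actions.append(f"scale cpus {alloc.cpus} -> {model.min_cpus}")
    if alloc.memory_gb < model.min_memory_gb:
        actions.append(f"scale memory {alloc.memory_gb} -> {model.min_memory_gb}")
    if alloc.observed_latency_ms > model.max_latency_ms:
        actions.append("add replica: latency requirement violated")
    return actions

print(reconcile(NfrModel(4, 8.0, 200.0), Allocation(2, 8.0, 350.0)))
```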

Relevance:

100.00%

Publisher:

Abstract:

Hierarchical visualization systems are desirable because a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex high-dimensional data sets. We extend an existing locally linear hierarchical visualization system, PhiVis [1], in several directions: (1) we allow for non-linear projection manifolds (the basic building block is the Generative Topographic Mapping, GTM), (2) we introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree, (3) we describe folding patterns of the low-dimensional projection manifold in the high-dimensional data space by computing and visualizing the manifold's local directional curvatures. Quantities such as magnification factors [3] and directional curvatures are helpful for understanding the layout of the nonlinear projection manifold in the data space and for further refinement of the hierarchical visualization plot. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. We demonstrate the principle of the approach on a complex 12-dimensional data set and mention possible applications in the pharmaceutical industry.
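
One of the quantities the abstract relies on, the magnification factor, can be estimated numerically for any smooth latent-to-data-space mapping. A minimal sketch follows; the GTM itself is omitted, and the mapping `f` below is a stand-in.

```python
# Sketch: estimating the local magnification factor of a smooth mapping
# f: R^2 (latent space) -> R^D (data space), used to inspect how a
# nonlinear projection manifold stretches latent space. The GTM is
# omitted; `f` is an illustrative stand-in nonlinear mapping.
import numpy as np

def magnification_factor(f, x, eps=1e-5):
    """sqrt(det(J^T J)) of f at latent point x, via central differences."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return np.sqrt(np.linalg.det(J.T @ J))

# Example: a curved embedding of the plane into R^3.
f = lambda x: np.array([x[0], x[1], np.sin(x[0]) * np.cos(x[1])])
print(magnification_factor(f, [0.5, -0.3]))
```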

Relevance:

100.00%

Publisher:

Abstract:

A recent novel approach to the visualisation and analysis of datasets, and one which is particularly applicable to those of a high dimension, is discussed in the context of real applications. A feed-forward neural network is utilised to effect a topographic, structure-preserving, dimension-reducing transformation of the data, with an additional facility to incorporate different degrees of associated subjective information. The properties of this transformation are illustrated on synthetic and real datasets, including the 1992 UK Research Assessment Exercise for funding in higher education. The method is compared and contrasted to established techniques for feature extraction, and related to topographic mappings, the Sammon projection and the statistical field of multidimensional scaling.
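
For reference, the Sammon projection the abstract compares against minimizes a distance-preservation criterion; here is a minimal sketch of that stress measure (the neural-network mapping itself is not reproduced).

```python
# Sketch: the Sammon stress, the structure-preservation criterion the
# abstract relates the neural-network mapping to. Low stress means the
# low-dimensional projection preserves the pairwise distances of the data.
import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X_high, X_low, eps=1e-12):
    d_high = pdist(X_high)            # pairwise distances in data space
    d_low = pdist(X_low)              # pairwise distances in projection
    d_high = np.maximum(d_high, eps)  # guard against zero distances
    return np.sum((d_high - d_low) ** 2 / d_high) / np.sum(d_high)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))         # synthetic 10-dimensional data
print(sammon_stress(X, X[:, :2]))     # stress of a naive 2-D projection
```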

Relevance:

100.00%

Publisher:

Abstract:

Technological advancements enable new sourcing models in software development, such as cloud computing, software-as-a-service, and crowdsourcing. While the first two are perceived as a re-emergence of older models (e.g., ASP), crowdsourcing is a new model that creates an opportunity for a global workforce to compete with established service providers. Organizations engaging in crowdsourcing need to develop the capabilities to successfully utilize this sourcing model in delivering services to their clients. To explore these capabilities, we collected qualitative data from focus groups with crowdsourcing leaders at a large technology organization. The new capabilities we identified stem from the need of the traditional service provider to assume a "client" role in the crowdsourcing context, while still acting as a "vendor" in providing services to the end client. This paper expands the research on vendor capabilities and IS outsourcing and offers important insights to organizations that are experimenting with, or considering, crowdsourcing.

Relevance:

100.00%

Publisher:

Abstract:

The world is connected by a core network of long-haul optical communication systems that link countries and continents, enabling long-distance phone calls, data-center communications, and the Internet. The demands on information rates have been constantly driven up by applications such as online gaming, high-definition video, and cloud computing. All over the world, end-user connection speeds are being increased by replacing conventional digital subscriber line (DSL) and asymmetric DSL (ADSL) with fiber to the home. Clearly, the capacity of the core network must also increase proportionally. © 1991-2012 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

The enormous potential of cloud computing for improved and cost-effective service has generated unprecedented interest in its adoption. However, a potential cloud user faces numerous risks regarding service requirements, the cost implications of failure, and uncertainty about cloud providers' ability to meet service level agreements. These risks hinder cloud adoption. We extend work on goal-oriented requirements engineering (GORE) and obstacles to inform the adoption process. We argue that obstacle prioritisation and resolution are core to mitigating risks in the adoption process. We propose a novel systematic method for prioritising obstacles and their resolution tactics using the Analytical Hierarchy Process (AHP). We provide an example to demonstrate the applicability and effectiveness of the approach. To assess the AHP-based choice of resolution tactics, we complement the method with stability and sensitivity analysis. Copyright 2014 ACM.
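
The AHP step at the core of the method admits a compact numerical sketch: derive priorities from a reciprocal pairwise-comparison matrix via its principal eigenvector and check judgement consistency. The obstacle names below are hypothetical.

```python
# Sketch of the AHP step described above: derive priorities for obstacles
# from a pairwise-comparison matrix via its principal eigenvector, and
# check judgement consistency. Obstacle names are hypothetical.
import numpy as np

def ahp_priorities(A):
    """Normalised principal eigenvector and consistency ratio of a
    reciprocal pairwise-comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                   # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                  # priority vector
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index
    return w, ci / ri                             # priorities, CR (< 0.1 ok)

# Pairwise comparisons for three hypothetical adoption obstacles.
A = [[1,   3,   5],     # "data lock-in" vs others
     [1/3, 1,   2],     # "SLA uncertainty"
     [1/5, 1/2, 1]]     # "cost overrun"
w, cr = ahp_priorities(A)
print("priorities:", w.round(3), "consistency ratio:", round(cr, 3))
```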

Relevance:

100.00%

Publisher:

Abstract:

Work on human self-awareness is the basis for a framework to develop computational systems that can adaptively manage complex dynamic tradeoffs at runtime. An architectural case study in cloud computing illustrates the framework's potential benefits.

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified and expressed in a probabilistic variant of temporal logic and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
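
For flavour, one representative costing query, the expected cumulated cost until an absorbing state of a small usage-pattern Markov chain, can be computed by value iteration. A minimal sketch with hypothetical states and costs follows; the paper's MDP models and temporal-logic queries are richer.

```python
# Sketch of one representative costing query: the expected cumulated cost
# until the absorbing "done" state of a small Markov chain abstracting a
# resource-usage pattern. States, probabilities, and costs are hypothetical.
import numpy as np

states = ["idle", "burst", "done"]
P = np.array([[0.7, 0.2, 0.1],    # transition probabilities from "idle"
              [0.4, 0.5, 0.1],    # ... from "burst"
              [0.0, 0.0, 1.0]])   # "done" is absorbing
cost = np.array([1.0, 5.0, 0.0])  # per-step cost (e.g. $/hour) in each state

# Value iteration: v = cost + P v on the non-absorbing states.
v = np.zeros(len(states))
for _ in range(10_000):
    v_new = cost + P @ v
    v_new[2] = 0.0                # no further cost once done
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new
print(dict(zip(states, v.round(2))))  # expected total cost from each state
```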

Relevance:

100.00%

Publisher:

Abstract:

Continuous progress in optical communication technology and the corresponding increase in data rates in core fiber communication systems are stimulated by the ever-growing capacity demand of constantly emerging new bandwidth-hungry services such as cloud computing, ultra-high-definition video streams, etc. This demand is pushing the required capacity of optical communication lines close to the theoretical limit of a standard single-mode fiber, which is imposed by Kerr nonlinearity [1–4]. In recent years, there have been extensive efforts to mitigate the detrimental impact of fiber nonlinearity on signal transmission through various compensation techniques. However, there are still many challenges in applying these methods, because the majority of technologies utilized in inherently nonlinear fiber communication systems were originally developed for linear communication channels. Consequently, the application of "linear techniques" in fiber communication systems is inevitably limited by the nonlinear properties of the fiber medium. The quest for the optimal design of nonlinear transmission channels, the development of nonlinear communication techniques, and the use of nonlinearity in a "constructive" way has occupied researchers for quite a long time.
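
The Kerr-limited channel described above is governed by the nonlinear Schrödinger equation, commonly simulated with the split-step Fourier method. A minimal sketch under one common sign convention, with illustrative fiber parameters:

```python
# Sketch: split-step Fourier propagation of the nonlinear Schrodinger
# equation modelling dispersion and Kerr nonlinearity in fiber,
#   dA/dz = -(i*beta2/2) d^2A/dT^2 + i*gamma*|A|^2 A
# (one common sign convention, loss neglected; parameters illustrative).
import numpy as np

def split_step(A, dz, dt, beta2=-21e-27, gamma=1.3e-3):
    """Advance the field envelope A(T) by dz metres."""
    w = 2 * np.pi * np.fft.fftfreq(A.size, dt)      # angular frequencies
    A = np.fft.ifft(np.exp(0.5j * beta2 * w**2 * dz) * np.fft.fft(A))
    return A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # Kerr phase rotation

# Propagate a picosecond-scale Gaussian pulse over 1 km in 10 m steps.
t = np.linspace(-50e-12, 50e-12, 1024)
A = np.sqrt(1e-3) * np.exp(-(t / 10e-12) ** 2)      # 1 mW peak power
for _ in range(100):
    A = split_step(A, 10.0, t[1] - t[0])
print("peak power after 1 km:", np.abs(A).max() ** 2)
```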

Relevance:

100.00%

Publisher:

Abstract:

To benefit from the advantages that cloud computing brings to the IT industry, management policies must be implemented as part of the operation of the cloud. For example, the specification of policies can be used for energy management, to reduce the cost of running the IT system, or for security policies handling privacy issues of users. As cloud platforms are large, manual enforcement of policies is not scalable. Hence, autonomic approaches to management policies have recently received considerable attention. These approaches allow the specification of rules that are executed via rule engines. The process of rule creation starts with the interpretation of the policies drafted by high-rank managers. Technical IT staff then translate such policies into operational activities to implement them. This process can start from a textual declarative description and, after numerous steps, terminate in a set of rules to be executed on a rule engine. To simplify these steps and to bridge the considerable gap between the declarative policies and executable rules, we propose a domain-specific language called CloudMPL. We also design a method for the automated transformation of the rules captured in CloudMPL into rules for the popular rule engine Drools. As policies change over time, code generation will reduce the time required for their implementation. In addition, using a declarative language for writing the specifications is expected to make the authoring of rules easier. We demonstrate the use of the CloudMPL language on a running example extracted from an energy-consumption management case study.
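
CloudMPL's concrete syntax is not given in the abstract, but the code-generation idea can be sketched: render a declarative policy object as a Drools rule. Fact and field names below are hypothetical.

```python
# Sketch of the code-generation idea: a declarative policy object rendered
# as a Drools rule. CloudMPL's actual syntax is not shown in the abstract,
# so the policy structure and fact/field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str        # rule name
    fact: str        # fact type matched in working memory
    condition: str   # constraint in Drools' LHS syntax
    action: str      # Java statement for the RHS

DROOLS_TEMPLATE = """rule "{name}"
when
    $f : {fact}( {condition} )
then
    {action}
end"""

def to_drools(p: Policy) -> str:
    return DROOLS_TEMPLATE.format(name=p.name, fact=p.fact,
                                  condition=p.condition, action=p.action)

# Energy-management example in the spirit of the paper's case study.
print(to_drools(Policy(
    name="ThrottleIdleServers",
    fact="ServerMetrics",
    condition="utilisation < 0.10, powerWatts > 200",
    action='energyManager.throttle($f.getServerId());')))
```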

Relevance:

100.00%

Publisher:

Abstract:

Volunteered Service Composition (VSC) refers to the process of composing volunteered services and resources. These services are typically published to a pool of voluntary resources. The composition aims at satisfying objectives such as utilizing storage and eliminating waste, sharing space and optimizing for energy, or reducing computational cost. In cases where a single volunteered service does not satisfy a request, VSC is required. In this paper, we contribute three approaches to composing volunteered services: an exhaustive, a naïve, and a utility-based search approach to VSC. The utility-based approach, for instance, measures the utility that each volunteered service can provide to each request and systematically selects the one with the highest utility. We found that the utility-based approach tends to be more effective and efficient when selecting services, while minimizing resource waste, when compared to the other two approaches.
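
A minimal sketch of the utility-based selection step follows; the utility function (tightness of capacity fit, penalizing waste) is an illustrative assumption, since the abstract does not spell out the paper's exact measure.

```python
# Sketch of the utility-based selection step: score each volunteered
# service against a request and pick the highest-utility one. The utility
# function (capacity fit, penalizing waste) is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    capacity_gb: float   # storage the volunteer offers

def utility(svc: Service, requested_gb: float) -> float:
    if svc.capacity_gb < requested_gb:
        return float("-inf")               # cannot satisfy the request alone
    return requested_gb / svc.capacity_gb  # 1.0 = perfect fit, no waste

def select(services, requested_gb):
    best = max(services, key=lambda s: utility(s, requested_gb))
    return best if utility(best, requested_gb) > float("-inf") else None

pool = [Service("vol-a", 50.0), Service("vol-b", 12.0), Service("vol-c", 80.0)]
print(select(pool, 10.0).name)   # vol-b: tightest fit, least waste
```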

Relevance:

50.00%

Publisher:

Abstract:

In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics, Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested both on a local computer cluster and on virtual machines in the cloud. By varying the number of computers and the amount of memory, we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run cross-evaluation of generic and graph-based analytic algorithms over graph-processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability with increasing computing resources. While GraphLab largely outperforms Spark for graph algorithms, its running-time performance is close to Spark's for non-graph algorithms. Additionally, the running time of Spark for graph algorithms on cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
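
A benchmark suite of this kind needs a measurement harness; the sketch below records wall-clock time and peak Python-heap memory for a single workload run, leaving cluster deployment, CPU sampling, and network/I/O counters aside. All names are hypothetical.

```python
# Sketch of a per-run measurement harness for a benchmark suite:
# wall-clock time and peak Python-heap memory for one workload execution.
import time, tracemalloc

def measure(workload, *args):
    """Run `workload(*args)` once; return (seconds, peak_mem_bytes, result)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = workload(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak, result

# Example: a toy "analytics" job over synthetic data.
secs, peak, _ = measure(lambda n: sum(i * i for i in range(n)), 1_000_000)
print(f"time={secs:.3f}s peak_heap={peak / 1e6:.1f}MB")
```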

Relevance:

40.00%

Publisher:

Abstract:

IEEE 802.15.4 networks (also known as ZigBee networks) have the features of low data rate and low power consumption. In this paper we propose an adaptive data transmission scheme, based on the CSMA/CA access control scheme, for applications which may have heavy traffic loads, such as smart grids. In the proposed scheme, the personal area network (PAN) coordinator adaptively broadcasts a frame-length threshold, which the sensors use to decide whether a data frame should be transmitted directly to its target destination or be preceded by a short data-request frame. If the data frame is long and prone to collision, use of a short data-request frame can efficiently reduce the energy and bandwidth costs of a potential collision. Simulation results demonstrate the effectiveness of the proposed scheme, with largely improved bandwidth and power efficiency. © 2011 Springer-Verlag.
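
The sensor-side decision rule can be sketched directly from the description above; the coordinator's threshold-update logic is not reproduced, and the byte values are illustrative.

```python
# Sketch of the sensor-side decision in the adaptive scheme: the PAN
# coordinator broadcasts a frame-length threshold; frames longer than the
# threshold are announced with a short data-request frame first, while
# shorter ones are transmitted directly. Threshold values are illustrative.
def choose_transmission(frame_len_bytes: int, threshold_bytes: int) -> str:
    """Return which CSMA/CA transaction the sensor should start."""
    if frame_len_bytes > threshold_bytes:
        # A long frame is costly to lose in a collision; contend with a
        # short data-request frame and send the payload once granted.
        return "send data-request, then data frame"
    return "send data frame directly"

# Example: the coordinator currently broadcasts a 40-byte threshold.
for length in (18, 96):
    print(length, "->", choose_transmission(length, 40))
```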