13 results for enterprise grid computing

at Universidad Politécnica de Madrid


Relevance: 90.00%

Abstract:

Infrastructure as a Service clouds are a flexible and fast way to obtain (virtual) resources as demand varies. Grids, on the other hand, are middleware platforms able to combine resources from different administrative domains for task execution. Clouds can be used by grids as providers of devices such as virtual machines, so they only use the resources they need. But this requires grids to be able to decide when to allocate and release those resources. Here we introduce and analyze through simulations an economic mechanism (a) to set resource prices and (b) to decide when to scale resources depending on the users' demand. This system has a strong emphasis on fairness, so no user hinders the execution of other users' tasks by getting too many resources. Our simulator is based on the well-known GridSim software for grid simulation, which we extend to simulate infrastructure clouds. The results show how the proposed system can successfully adapt the amount of allocated resources to the demand, while at the same time ensuring that resources are fairly shared among users.
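
As a rough illustration of the kind of demand-driven pricing and scaling rule this abstract refers to, the sketch below (in Python) raises the per-VM price with utilization and decides when to allocate or release cloud resources; the function names, thresholds and the price-update formula are assumptions made for illustration, not the authors' actual mechanism.

def update_price(base_price, busy_vms, total_vms, elasticity=1.0):
    """Raise the per-VM price as utilization grows, lower it when capacity sits idle."""
    utilization = busy_vms / max(total_vms, 1)
    return base_price * (1.0 + elasticity * (utilization - 0.5))

def scaling_decision(pending_tasks, busy_vms, total_vms,
                     scale_up_threshold=0.8, scale_down_threshold=0.3):
    """Decide whether to allocate or release cloud VMs based on current demand."""
    utilization = busy_vms / max(total_vms, 1)
    if pending_tasks > 0 and utilization >= scale_up_threshold:
        return "allocate"
    if pending_tasks == 0 and utilization <= scale_down_threshold:
        return "release"
    return "hold"

A fairness-oriented variant would additionally cap the number of VMs any single user may hold before further allocations are granted.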

Relevance: 90.00%

Abstract:

Complexity has always been one of the most important issues in distributed computing. From the first clusters to grid and now cloud computing, dealing correctly and efficiently with system complexity is the key to taking technology a step further. In this sense, global behavior modeling is an innovative methodology aimed at understanding grid behavior. The main objective of this methodology is to synthesize the grid's vast, heterogeneous nature into a simple but powerful behavior model, represented in the form of a single, abstract entity with a global state. Global behavior modeling has proved to be very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It generates a descriptive model that could be greatly improved if extended not only to explain behavior, but also to predict it. In this paper we present a prediction methodology whose objective is to define the techniques needed to create global behavior prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this approach.
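
A minimal sketch of the general idea, assuming a global state obtained by averaging per-resource load and a simple autoregressive forecaster fitted by least squares; both the aggregation and the predictor are illustrative assumptions, not the concrete techniques defined in the paper.

import numpy as np

def global_state(per_resource_load):
    """Collapse per-resource load samples into one global utilization value."""
    return float(np.mean(per_resource_load))

def fit_ar_predictor(history, order=3):
    """Fit x_t ~ sum(w_i * x_{t-i}) to a series of global states by least squares."""
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = np.array(history[order:])
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights

def predict_next(history, weights):
    """Forecast the next global state from the most recent samples."""
    order = len(weights)
    return float(np.dot(history[-order:], weights))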

Relevance: 90.00%

Abstract:

Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But large-scale distributed systems complexity could only be a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.

Relevance: 80.00%

Abstract:

The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.

Relevance: 80.00%

Abstract:

Energy consumption in data centers is nowadays a critical concern because of its dramatic environmental and economic impact. Over the last years, several approaches have been proposed to tackle the energy/cost optimization problem, but most of them have failed to provide an analytical model that targets both the static and dynamic optimization domains for complex heterogeneous data centers. This paper proposes and solves an optimization problem for the energy-driven configuration of a heterogeneous data center. It also proposes a new mechanism for task allocation and workload distribution. The combination of both approaches outperforms previously published results in the field of energy minimization in heterogeneous data centers and opens a promising area of research.
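
A hypothetical greedy sketch of energy-driven task allocation in a heterogeneous data center: each task is placed on the server with the lowest marginal energy cost. The server parameters and the linear power model are assumptions for illustration only; the paper formulates and solves a full optimization problem rather than this heuristic.

def marginal_energy(server, extra_load):
    """Extra power (W) incurred by adding `extra_load` CPU share to a server."""
    return server["dynamic_watts_per_load"] * extra_load + (
        server["static_watts"] if server["load"] == 0.0 else 0.0)

def allocate(tasks, servers):
    """Assign each task to the feasible server with the smallest energy increase."""
    placement = {}
    for task in tasks:
        feasible = [s for s in servers if s["load"] + task["load"] <= 1.0]
        best = min(feasible, key=lambda s: marginal_energy(s, task["load"]))
        best["load"] += task["load"]
        placement[task["name"]] = best["name"]
    return placement

servers = [
    {"name": "low-power", "static_watts": 60, "dynamic_watts_per_load": 90, "load": 0.0},
    {"name": "high-perf", "static_watts": 120, "dynamic_watts_per_load": 180, "load": 0.0},
]
tasks = [{"name": "t1", "load": 0.4}, {"name": "t2", "load": 0.5}]
print(allocate(tasks, servers))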

Relevance: 80.00%

Abstract:

Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service which is available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, where several models and metamodels are included. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches to define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are the consolidated ones and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including distributed, information, and telecommunication systems, networks and security, and service-oriented and grid computing.

Relevance: 80.00%

Abstract:

This work proposes an automatic methodology for modeling complex systems. Our methodology is based on the combination of Grammatical Evolution and classical regression to obtain an optimal set of features that form part of a linear and convex model. This technique provides both Feature Engineering and Symbolic Regression in order to infer accurate models with no manual effort or designer expertise required. As advanced Cloud services are becoming mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. These facilities consume from 10 to 100 times more power per square foot than typical office buildings. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers not yet satisfied by analytical approaches. For this case study, our methodology minimizes error in power prediction. This work has been tested using real Cloud applications, resulting in an average error in power estimation of 3.98%. Our work improves the possibilities of deriving energy-efficient policies in Cloud data centers and is applicable to other computing environments with similar characteristics.
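
A simplified sketch of the feature-construction idea: candidate feature expressions are drawn from a tiny grammar and scored by how well a linear model built on them fits the power measurements. Real Grammatical Evolution evolves the expressions with genetic operators; plain random search is used here only to keep the example short, and the variable names are assumptions.

import random
import numpy as np

BASE_VARS = ["cpu_util", "mem_util", "fan_speed"]

def random_feature():
    """Draw one feature expression from a minimal grammar."""
    a, b = random.choice(BASE_VARS), random.choice(BASE_VARS)
    template = random.choice(["{a} * {b}", "{a} + {b}", "{a} * {a}"])
    return template.format(a=a, b=b)

def design_matrix(data, features):
    """Build [1, f1(x), f2(x), ...] columns from a dict of measurement arrays."""
    cols = [np.ones(len(data["cpu_util"]))]
    cols += [eval(f, {}, data) for f in features]  # sketch only; avoid eval in production code
    return np.column_stack(cols)

def score(data, power, features):
    """Mean squared error of the best linear model over the candidate features."""
    X = design_matrix(data, features)
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    return np.mean((X @ coef - power) ** 2)

def search(data, power, n_features=3, iterations=200):
    """Keep the feature set with the lowest regression error."""
    best, best_err = None, float("inf")
    for _ in range(iterations):
        candidate = [random_feature() for _ in range(n_features)]
        err = score(data, power, candidate)
        if err < best_err:
            best, best_err = candidate, err
    return best, best_err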

Relevance: 30.00%

Abstract:

In parallel to the effort of creating Open Linked Data for the World Wide Web, there are a number of projects aimed at developing the same technologies in the context of their usage in closed environments such as private enterprises. In this paper, we present results of research on interlinking structured data for use in Idea Management Systems, a still rare breed of knowledge management systems dedicated to innovation management. In our study, we show the process of extending an ontology that initially covers only the Idea Management System structure towards the concept of linking with distributed enterprise data and public data using Semantic Web technologies. Furthermore, we point out how the established links can help to solve the key problems of contemporary Idea Management Systems.

Relevance: 30.00%

Abstract:

Data grid services have been used to deal with the increasing needs of applications in terms of data volume and throughput. The large scale, heterogeneity and dynamism of grid environments often make management and tuning of these data services very complex. Furthermore, current high-performance I/O approaches are characterized by their high complexity and specific features that usually require specialized administrator skills. Autonomic computing can help manage this complexity. The present paper describes an autonomic subsystem intended to provide self-management features aimed at efficiently alleviating the I/O problem in a grid environment, thereby enhancing the quality of service (QoS) of data access and storage services in the grid. Our proposal takes into account that data produced in an I/O system is not usually required immediately. Therefore, performance improvements are related not only to current but also to future I/O accesses, as the actual data access usually occurs later on. Nevertheless, the exact time of the next I/O operations is unknown. Thus, our approach proposes a long-term prediction designed to forecast the future workload of grid components. This enables the autonomic subsystem to determine the optimal data placement to improve both current and future I/O operations.
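
An illustrative sketch of the placement idea: forecast how often each data set will be accessed over a long horizon and keep the hottest data on the fastest storage tier. The exponential-smoothing forecast and the two-tier layout are assumptions made for this example; the paper defines its own prediction and placement machinery.

def forecast_accesses(history, alpha=0.3):
    """Exponentially smoothed estimate of the future access rate per data set."""
    estimate = {}
    for name, counts in history.items():
        value = counts[0]
        for c in counts[1:]:
            value = alpha * c + (1 - alpha) * value
        estimate[name] = value
    return estimate

def plan_placement(forecast, fast_tier_slots):
    """Put the data sets with the highest predicted demand on the fast tier."""
    ranked = sorted(forecast, key=forecast.get, reverse=True)
    return {name: ("fast" if i < fast_tier_slots else "slow")
            for i, name in enumerate(ranked)}

history = {"genome.dat": [5, 9, 14], "logs.tar": [3, 2, 1], "model.bin": [8, 8, 7]}
print(plan_placement(forecast_accesses(history), fast_tier_slots=2))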

Relevance: 30.00%

Abstract:

In the present competitive environment, companies are wondering how to reduce their IT costs while increasing their efficiency and agility to react when changes in the business processes are required. Cloud Computing is the latest paradigm to optimize the use of IT resources, considering "everything as a service" and receiving these services from the Cloud (Internet) instead of owning and managing hardware and software assets. The benefits of the model are clear. However, there are also concerns and issues to be solved before Cloud Computing spreads across the different industries. This model will allow a pay-per-use scheme for IT services and many benefits like cost savings, agility to react when business demands change, and simplicity because there will not be any infrastructure to operate and administrate. It will be comparable to well-known utilities like electricity, water or gas companies. However, this paper underlines several risk factors of the model. Leading technology companies should research solutions to minimize the risks described in this article. Keywords: Cloud Computing, Utility Computing, Elastic Computing, Enterprise Agility.

Relevance: 30.00%

Abstract:

In the context of the European project FI-WARE, the CoNWeT Lab (laboratory of the ETSI Informáticos at UPM) has implemented the web platform WStore, a reference implementation of the Store Generic Enabler of that project. The goal of FI-WARE is to create the core platform of the Future Internet with the intention of enhancing Europe's global competitiveness in IT; the project introduces an innovative infrastructure for the creation and distribution of digital services over the Internet. WStore offers service providers a platform on which to publish their offerings and from which customers can access them. Providers offer web services, applications, widgets and data sets in the same way that Google offers applications in the Google Play store or Apple in the App Store. WStore is currently implemented as a web platform, so an organization that wants to offer the store service needs to install the software on its own server and have a domain from which to offer its products. The objective of this work is to migrate WStore to a cloud computing environment so that a single instance offers the service to organizations that want their own platform, over which they will have total control as if it were running on their own infrastructure. To this end, a version of WStore is implemented that is deployed on a cloud infrastructure and offered as Software as a Service. The implementation includes a series of code modules that can optionally be added during the installation process if the installed instance is to be multi-tenant. In addition, this work studies and tests the tools that MongoDB provides for deploying the multi-tenant WStore platform on a cloud infrastructure; these tools are replica sets and sharding, which allow a scalable, highly available database to be deployed.
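
A hedged sketch of how such a deployment could be driven from pymongo: initiate a replica set for availability and shard the per-tenant collections for scale. Host names, database and collection names, and the tenant_id shard key are placeholders, not the actual WStore configuration.

from pymongo import MongoClient

# Initiate a three-member replica set (run once against one of the mongod nodes).
seed = MongoClient("mongodb://db1.example.org:27017", directConnection=True)
seed.admin.command("replSetInitiate", {
    "_id": "wstore-rs",
    "members": [
        {"_id": 0, "host": "db1.example.org:27017"},
        {"_id": 1, "host": "db2.example.org:27017"},
        {"_id": 2, "host": "db3.example.org:27017"},
    ],
})

# Through the mongos router, enable sharding and shard the offerings collection
# by tenant so each organization's data can be distributed and kept separate.
router = MongoClient("mongodb://mongos.example.org:27017")
router.admin.command("enableSharding", "wstore")
router.admin.command("shardCollection", "wstore.offerings",
                     key={"tenant_id": 1, "_id": 1})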

Relevance: 30.00%

Abstract:

As advanced Cloud services are becoming mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. The average consumption of a single data center is equivalent to the energy consumption of 25,000 households. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers not yet satisfied by analytical approaches. This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for the identification of power models of enterprise servers in Cloud data centers. Our approach, as opposed to previous procedures, does not only consider workload consolidation for deriving the power model, but also incorporates other non-traditional factors such as static power consumption and its dependence on temperature. Our experimental results show that we reach slightly better models than classical approaches while simultaneously simplifying the power model structure and thus the number of sensors needed, which is very promising for short-term energy prediction. This work, validated with real Cloud applications, broadens the possibilities of deriving efficient energy-saving techniques for Cloud facilities.
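
A simplified sketch of the identification step: a particle swarm searches for the coefficients of a server power model that includes a static term and a temperature dependence, P = p0 + p1*T + p2*u_cpu. A single-objective swarm and this model form are used here only for brevity; the paper employs a multi-objective formulation, and the coefficient names are illustrative assumptions.

import numpy as np

def power_model(params, temp, cpu_util):
    p0, p1, p2 = params
    return p0 + p1 * temp + p2 * cpu_util

def fit_pso(temp, cpu_util, measured, n_particles=30, iters=200, seed=0):
    """Fit (p0, p1, p2) to measured power with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 300, size=(n_particles, 3))   # candidate coefficient vectors
    vel = np.zeros_like(pos)

    def cost(p):
        return np.mean((power_model(p, temp, cpu_util) - measured) ** 2)

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest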

Relevance: 30.00%

Abstract:

This article shows the benefits of developing a software project using TSPi in a Very Small Enterprise, based on quality and productivity measures. A process adapted from the current one and based on TSPi was defined, and the team was trained in it. The pilot project had schedule and budget constraints. The work began by gathering historical data from previous projects in order to build a measurement repository, and then the project metrics were collected. Finally, the process, product and quality improvements were verified.