28 results for Service-oriented grid computing
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Content-centric networking is a novel paradigm for the Future Internet that treats content as a first-class citizen. This paper argues that content-centric networking should be generalized towards a service-centric networking scheme. We propose a service-centric networking design based on an object-oriented approach, in which content and services are considered objects. We show implementation architectures for example services and how these can benefit from service-oriented networking.
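A minimal sketch of the kind of object model such a design implies, with hypothetical class and object names not taken from the paper: content and services are both named objects resolved by name rather than by host, and a service object can be invoked on a content object.

```python
# Illustrative sketch only; class names and the resolver are assumptions.
from abc import ABC, abstractmethod


class NamedObject(ABC):
    """Common base for content and service objects addressed by name."""

    def __init__(self, name: str):
        self.name = name  # hierarchical object name, e.g. "/videos/clip1"


class ContentObject(NamedObject):
    def __init__(self, name: str, data: bytes):
        super().__init__(name)
        self.data = data


class ServiceObject(NamedObject):
    @abstractmethod
    def invoke(self, content: ContentObject) -> ContentObject:
        """Transform a content object and return the result."""


class TranscodeService(ServiceObject):
    def invoke(self, content: ContentObject) -> ContentObject:
        # Placeholder transformation standing in for real transcoding.
        return ContentObject(content.name + "/low-res", content.data[:10])


# A resolver maps object names to objects, so requests address names, not hosts.
registry: dict[str, NamedObject] = {
    "/videos/clip1": ContentObject("/videos/clip1", b"raw video bytes"),
    "/services/transcode": TranscodeService("/services/transcode"),
}

service = registry["/services/transcode"]
result = service.invoke(registry["/videos/clip1"])
print(result.name, result.data)
```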
Abstract:
Integrating physical objects (smart objects) and enterprise IT systems is still a labor-intensive, mainly manual task done by domain experts. On the one hand, enterprise IT backend systems are based on service-oriented architectures (SOA) and driven by business rule engines or business process execution engines. On the other hand, smart objects are often programmed at very low levels. In this paper we describe an approach that makes the integration of smart objects with such backend systems easier. We introduce semantic endpoint descriptions based on Linked USDL. Furthermore, we show how different communication patterns can be integrated into these endpoint descriptions. The strength of our endpoint descriptions is that they can be used to automatically create REST or SOAP endpoints for enterprise systems, even if they are not able to talk to the smart objects directly. We evaluate our proposed solution with CoAP, UDP and 6LoWPAN, as we anticipate that the industry will converge towards these standards. Nonetheless, our approach also allows easy integration with backend systems even if no standardized protocol is used.
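An illustrative sketch of the general idea, with an invented endpoint description format (the actual approach uses Linked USDL, whose vocabulary is not reproduced here): a simplified description is used to generate a REST endpoint that forwards backend requests to a smart object over plain UDP.

```python
# Sketch under stated assumptions: ENDPOINT fields, device address and the
# on-wire message are all hypothetical; only the pattern is from the abstract
# (auto-generated REST endpoint bridging to a smart object over UDP).
import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical endpoint description for a temperature sensor.
ENDPOINT = {
    "path": "/sensors/temperature",      # REST path exposed to the backend
    "device": ("2001:db8::1", 5683),     # smart-object IPv6 address and UDP port
    "pattern": "request-response",       # communication pattern
}


class GeneratedRestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != ENDPOINT["path"]:
            self.send_error(404)
            return
        # Forward the request to the smart object and relay its answer.
        try:
            with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as sock:
                sock.settimeout(2.0)
                sock.sendto(b"GET temperature", ENDPOINT["device"])
                payload, _ = sock.recvfrom(1024)
        except socket.timeout:
            self.send_error(504)  # smart object did not respond in time
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"value": payload.decode()}).encode())


if __name__ == "__main__":
    HTTPServer(("", 8080), GeneratedRestHandler).serve_forever()
```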
Abstract:
Because the usage scenarios are unknown in advance, designing the elementary services of a service-oriented architecture (SOA), which form the basis for later composition, is rather difficult. Various design guidelines have been proposed by academia, tool vendors and consulting companies, but they differ in the rigor of validation and are often biased toward some technology. For that reason, a multiple-case study was conducted in five large organizations that successfully introduced SOA in their daily business. The observed approaches are contrasted with the findings from a literature review to derive some recommendations for SOA service design.
Abstract:
How do developers and designers of a new technology make sense of intended users? The critical groundwork for user-centred technology development begins not with actual users' exposure to the technological artefact but much earlier, with designers' and developers' vision of future users. Thus, anticipating intended users is critical to technology uptake. We conceptualise the anticipation of intended users as a form of prospective sensemaking in technology development. Employing a narrative analytical approach and drawing on four key communities in the development of Grid computing, we reconstruct how each community anticipated the intended Grid user. Based on our findings, we conceptualise user anticipation in terms of two key dimensions, namely the intended possibility to inscribe user needs into the technological artefact and the intended scope of the application domain. These dimensions allow us to develop an initial typology of intended user concepts that in turn might provide a key building block towards a generic typology of intended users.
Abstract:
Cloud computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focusses on how SLAs can be used as input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications and for using these relations to build scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs, based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
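A minimal sketch of how an SLA performance constraint could drive a VM-scaling decision; the constraint fields, thresholds and hysteresis factor are illustrative assumptions, not the dissertation's actual algorithms.

```python
# Hypothetical SLA-driven scaling rule for one service tier (assumptions only).
from dataclasses import dataclass


@dataclass
class SlaConstraint:
    max_response_time_ms: float   # SLA performance guarantee
    min_vms: int                  # lower bound on allocated VMs
    max_vms: int                  # upper bound on allocated VMs


def scaling_decision(current_vms: int, observed_response_ms: float,
                     sla: SlaConstraint) -> int:
    """Return the new number of VMs based on the observed response time."""
    if observed_response_ms > sla.max_response_time_ms and current_vms < sla.max_vms:
        return current_vms + 1    # scale out: the guarantee is being violated
    if observed_response_ms < 0.5 * sla.max_response_time_ms and current_vms > sla.min_vms:
        return current_vms - 1    # scale in: ample headroom, free resources
    return current_vms            # keep the current allocation


sla = SlaConstraint(max_response_time_ms=200.0, min_vms=1, max_vms=10)
print(scaling_decision(current_vms=3, observed_response_ms=250.0, sla=sla))  # -> 4
```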
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means for specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all her/his applications are met. She/he is also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants’ applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We prove that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
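A toy sketch of a genetic-algorithm VM-to-host allocation of the kind described above; the encoding, fitness function and parameters here are illustrative assumptions, not the thesis' exact multi-objective model.

```python
# Hypothetical GA for placing VMs on hosts: minimize overload first, then the
# number of hosts in use (a stand-in for the multi-criteria optimization).
import random

VM_CPU = [2, 4, 1, 3, 2, 4]        # CPU demand of each VM (assumed)
HOST_CAPACITY = 8                  # identical hosts, CPU capacity each
NUM_HOSTS = 4


def fitness(assignment):
    """Lower is better: heavily penalize overloaded hosts, then count hosts used."""
    load = [0] * NUM_HOSTS
    for vm, host in enumerate(assignment):
        load[host] += VM_CPU[vm]
    overload = sum(max(0, l - HOST_CAPACITY) for l in load)
    hosts_used = sum(1 for l in load if l > 0)
    return 100 * overload + hosts_used


def evolve(generations=200, pop_size=30):
    # Each individual maps VM index -> host index.
    pop = [[random.randrange(NUM_HOSTS) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_CPU))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                child[random.randrange(len(VM_CPU))] = random.randrange(NUM_HOSTS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)


best = evolve()
print("allocation:", best, "fitness:", fitness(best))
```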
Abstract:
Context-dependent behavior is becoming increasingly important for a wide range of application domains, from pervasive computing to common business applications. Unfortunately, mainstream programming languages do not provide mechanisms that enable software entities to adapt their behavior dynamically to the current execution context. This leads developers to adopt convoluted designs to achieve the necessary runtime flexibility. We propose a new programming technique called Context-oriented Programming (COP) which addresses this problem. COP treats context explicitly, and provides mechanisms to dynamically adapt behavior in reaction to changes in context, even after system deployment at runtime. In this paper we lay the foundations of COP, show how dynamic layer activation enables multi-dimensional dispatch, illustrate the application of COP by examples in several language extensions, and demonstrate that COP is largely independent of other commitments to programming style.
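A hand-rolled sketch of the idea in Python (the paper presents COP as language extensions; this imitation of dynamic layer activation with a context manager is purely illustrative and uses invented names).

```python
# Minimal COP-style sketch: behavior adapts when a layer is dynamically active.
from contextlib import contextmanager

active_layers: list[str] = []      # currently activated layers


@contextmanager
def with_layer(name: str):
    """Dynamically activate a layer for the duration of a block."""
    active_layers.append(name)
    try:
        yield
    finally:
        active_layers.remove(name)


class Person:
    def __init__(self, name: str, city: str):
        self.name, self.city = name, city

    def display(self) -> str:
        text = self.name
        if "address" in active_layers:   # layer-specific partial behavior
            text += f", {self.city}"
        return text


p = Person("Ada", "Bern")
print(p.display())                 # "Ada" — base behavior
with with_layer("address"):
    print(p.display())             # "Ada, Bern" — behavior adapted to context
```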