721 results for cloud computing, software as a service, SaaS, enterprise systems, IS success
Abstract:
A working paper for discussion
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
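To make the game-theoretic setting concrete, the following toy Python sketch plays best-response dynamics for a one-dimensional collocation-style game. The proportional cost sharing, unit capacities, the demand values, and all names are assumptions made for illustration; they are not taken from the paper, which also treats multidimensional resources and heterogeneous capacities.

# Minimal best-response sketch of a collocation-style game (illustrative only).
# Assumed, not from the paper: each resource has a fixed cost shared in
# proportion to a player's demand, and players move one at a time.
CAPACITY = 1.0                      # normalized capacity of every resource
COST = 1.0                          # fixed cost of leasing one resource
demands = [0.5, 0.4, 0.3, 0.3, 0.2] # one-dimensional demands of the players
assignment = list(range(len(demands)))  # start with every player on its own resource

def load(res):
    return sum(d for d, a in zip(demands, assignment) if a == res)

def share(player, res):
    """Cost charged to `player` if it uses resource `res` (proportional sharing)."""
    total = load(res) + (0 if assignment[player] == res else demands[player])
    return COST * demands[player] / total

def best_response(player):
    """Move `player` to the feasible resource with the lowest cost share."""
    current = assignment[player]
    options = [r for r in set(assignment) | {max(assignment) + 1}
               if r == current or load(r) + demands[player] <= CAPACITY]
    # Ties keep the current resource, so only strict improvements trigger a move.
    best = min(options, key=lambda r: (share(player, r), r != current))
    changed = best != current
    assignment[player] = best
    return changed

while True:
    if not any(best_response(p) for p in range(len(demands))):
        break   # no player can improve alone: a pure Nash equilibrium for this toy instance

print(assignment)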
Abstract:
Emerging healthcare applications can benefit enormously from recent advances in pervasive technology and computing. This paper introduces the CLARITY Modular Ambient Health and Wellness Measurement Platform, a heterogeneous and robust pervasive healthcare solution currently under development at the CLARITY Center for Sensor Web Technologies. This intelligent and context-aware platform comprises the Tyndall Wireless Sensor Network prototyping system, augmented with an agent-based middleware and frontend computing architecture. The key contribution of this work is to highlight how interoperability, expandability, reusability and robustness can be manifested in the modular design of the constituent nodes and the inherently distributed nature of the controlling software architecture.
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving the core optical communication links closer and closer to their maximum capacity. The research community has clearly identified the approaching nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode fibre and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and to assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, and compares the results to numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines some of the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem directly implemented at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001; [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
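As a purely illustrative aside (not drawn from the thesis), the snippet below enumerates the eight four-dimensional symbols commonly used to describe PS-QPSK: a QPSK point is sent on one polarisation while the other polarisation carries nothing, giving three bits per symbol and constant symbol energy.

# Illustrative sketch: the eight 4-D symbols of polarisation-switched QPSK.
import numpy as np

qpsk = [(1 + 1j), (1 - 1j), (-1 + 1j), (-1 - 1j)]   # unnormalised QPSK points

symbols = []
for point in qpsk:
    symbols.append([point.real, point.imag, 0.0, 0.0])  # QPSK on x-polarisation
    symbols.append([0.0, 0.0, point.real, point.imag])  # QPSK on y-polarisation

symbols = np.array(symbols)                 # shape (8, 4): [ExI, ExQ, EyI, EyQ]
energies = (symbols ** 2).sum(axis=1)
print(symbols)
print("constant symbol energy:", np.allclose(energies, energies[0]))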
Abstract:
Open environments involve distributed entities interacting with each other in an open manner. Many distributed entities are unknown to each other but need to collaborate and share resources in a secure fashion. Usually resource owners alone decide who is trusted to access their resources. Since resource owners in open environments do not have a complete picture of all trusted entities, trust management frameworks are used to ensure that only authorized entities will access requested resources. Every trust management system has limitations, and these limitations can be exploited by malicious entities. One vulnerability is due to the lack of a globally unique interpretation for permission specifications. This limitation means that a malicious entity which receives a permission in one domain may misuse the permission in another domain via some deceptive but apparently authorized route; this malicious behaviour is called subterfuge. This thesis develops a secure approach, Subterfuge Safe Trust Management (SSTM), that prevents subterfuge by malicious entities. SSTM employs the Subterfuge Safe Authorization Language (SSAL), which uses the idea of a local permission with a globally unique interpretation (localPermission) to resolve the misinterpretation of permissions. We model and implement SSAL with an ontology-based approach, SSALO, which provides a generic representation for knowledge related to the SSAL-based security policy. SSALO enables the integration of heterogeneous security policies, which is useful for secure cooperation among principals in open environments where each principal may have a different security policy with a different implementation. Another advantage of the ontology-based approach is the Open World Assumption, whereby reasoning over an existing security policy is easily extended to include further security policies that might be discovered in an open distributed environment. We add two extra SSAL rules to support dynamic coalition formation and secure cooperation among coalitions. Secure federation of cloud computing platforms and secure federation of XMPP servers are presented as case studies of SSTM. The results show that SSTM provides robust accountability for the use of permissions in federation. It is also shown that SSAL is a suitable policy language to express subterfuge-safe policy statements due to its well-defined semantics, ease of use, and integrability.
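The sketch below is a hypothetical illustration of the localPermission idea only; the class and function names are invented, and SSAL's actual syntax, ontology (SSALO) and rules are not reproduced. It simply shows how qualifying a permission by its defining domain removes the ambiguity that subterfuge exploits.

# Hypothetical illustration of the localPermission concept: a permission name
# only has meaning relative to the domain that defined it, so the same string
# issued in two domains can never be confused.
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalPermission:
    defining_domain: str   # globally unique identifier of the issuing domain
    name: str              # permission name, local to that domain

# "storage.read" defined by two different providers is two distinct permissions.
p_a = LocalPermission("https://cloud-a.example.org", "storage.read")
p_b = LocalPermission("https://cloud-b.example.org", "storage.read")

def authorise(granted: set, requested: LocalPermission) -> bool:
    """A request is honoured only for the exact domain-qualified permission."""
    return requested in granted

granted_to_client = {p_a}
print(authorise(granted_to_client, p_a))   # True  - same domain, same meaning
print(authorise(granted_to_client, p_b))   # False - no cross-domain misuse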
Abstract:
Realizing scalable performance on high-performance computing systems is not straightforward for single-phenomenon codes (such as computational fluid dynamics [CFD]). The task is magnified considerably when the target software involves the interactions of a range of phenomena that have distinctive solution procedures involving different discretization methods. The key issues of retaining data integrity and preserving the ordering of the calculation procedures are significant. A strategy for parallelizing this multiphysics family of codes is described for software exploiting finite-volume discretization methods on unstructured meshes using iterative solution procedures. A mesh-partitioning-based SPMD approach is used. However, since different variables use distinct discretization schemes, distinct partitions are required; techniques for addressing this issue are described using the mesh-partitioning tool JOSTLE. In this contribution, the strategy is tested for a variety of test cases under a wide range of conditions (e.g., problem size, number of processors, asynchronous/synchronous communications) using a variety of strategies for mapping the mesh partition onto the processor topology.
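The toy sketch below, which does not use JOSTLE and is not taken from the paper, illustrates the core issue of distinct partitions: when two phenomena partition the same cells differently, a cross-partition map must be built so that data owned by one processor under one partition can be delivered to its owner under the other.

# Hypothetical sketch of the 'distinct partitions' issue: the same mesh cells
# are partitioned differently for two phenomena (e.g. flow and stress), so a
# cross-partition map is needed to keep data consistent between solvers.
cells = range(12)

flow_partition   = {c: c % 3 for c in cells}         # partition used by solver A
stress_partition = {c: (c // 4) % 3 for c in cells}  # partition used by solver B

# For every processor pair (p, q), list the cells whose flow data live on p
# but whose stress data live on q - these values must be exchanged.
exchange = {}
for c in cells:
    p, q = flow_partition[c], stress_partition[c]
    if p != q:
        exchange.setdefault((p, q), []).append(c)

for (p, q), owned in sorted(exchange.items()):
    print(f"processor {p} -> processor {q}: cells {owned}")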
Abstract:
Parallel computing is now widely used in numerical simulation, particularly for application codes based on finite difference and finite element methods. A popular and successful technique employed to parallelize such codes onto large distributed-memory systems is to partition the mesh into sub-domains that are then allocated to processors. The code then executes in parallel, using the SPMD methodology, with message passing for inter-processor interactions. In order to improve the parallel efficiency of an imbalanced structured-mesh CFD code, a new dynamic load balancing (DLB) strategy has been developed in which the processor partition range limits of just one of the partitioned dimensions are non-coincidental, as opposed to coincidental. This ‘local’ partition limit change allows greater flexibility in obtaining a balanced load distribution, as the workload increase, or decrease, on a processor is no longer restricted by the ‘global’ (coincidental) limit change. The automatic implementation of this generic DLB strategy within an existing parallel code is presented in this chapter, along with some preliminary results.
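A minimal sketch of the idea follows, with an invented 2 x 2 processor layout and random workload (none of which comes from the chapter): with coincidental limits every processor row must share one column split, whereas non-coincidental limits let each row choose its own split to balance its local workload.

# Sketch contrasting 'global' (coincidental) column limits with 'local'
# (non-coincidental) limits when splitting a structured mesh across a
# 2 x 2 processor grid.
import numpy as np

work = np.random.default_rng(0).random((8, 8))        # per-cell workload of the mesh
rows = np.array_split(np.arange(work.shape[0]), 2)    # fixed row partition

def column_split(weights):
    """Return the column index that best halves the given workload."""
    cum = np.cumsum(weights)
    return int(np.argmin(np.abs(cum - cum[-1] / 2))) + 1

# Coincidental limits: one split column shared by every processor row.
global_split = column_split(work.sum(axis=0))

# Non-coincidental limits: each processor row chooses its own split column.
local_splits = [column_split(work[r].sum(axis=0)) for r in rows]

print("global column split:", global_split)
print("per-row column splits:", local_splits)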
Abstract:
This chapter discusses the code parallelization environment, in which a number of tools that address the main tasks, such as code parallelization, debugging, and optimization, are available. The parallelization tools include ParaWise and CAPO, which enable the near-automatic parallelization of real-world scientific application codes for shared- and distributed-memory parallel systems. The chapter discusses the use of ParaWise and CAPO to transform the original serial code into an equivalent parallel code that contains appropriate OpenMP directives. Additionally, as user involvement can introduce errors, a relative debugging tool (P2d2) is also available and can be used to perform near-automatic relative debugging of an OpenMP program that has been parallelized either using the tools or manually. For these tools to be effective in parallelizing a range of applications, a high-quality, fully inter-procedural dependence analysis, together with user interaction, is vital to the generation of efficient parallel code and to the optimization of the backtracking and speculation process used in relative debugging. Results for parallelized NASA codes are discussed and show the benefits of using the environment.
Abstract:
[This abstract is based on the authors' abstract.] Three new standards to be applied when adopting commercial off-the-shelf (COTS) software solutions are discussed. The first standard is for a COTS software life cycle, the second for a software solution user requirements life cycle, and the third is a checklist to help in completing the requirements. The standards are based on recent major COTS software solution implementations.
Abstract:
This study finds evidence that attempts to reduce costs and error rates in the Inland Revenue through the use of e-commerce technology are flawed. While it is technically possible to write software that will record tax data and then transmit it to the Inland Revenue, there is little demand for this service. The key finding is that the tax system is so complex that many people are unable to complete their own tax returns, and this complexity cannot be overcome by well-designed software. The recommendation is therefore to encourage the use of agents to assist taxpayers, or to simplify the tax system. The Inland Revenue is interested in reducing administrative costs and errors by encouraging electronic submission of tax returns; given the data, it seems clear that the focus for achieving these objectives should be on facilitating the work of agents.
Abstract:
Modelling and control of nonlinear dynamical systems is a challenging problem, since the dynamics of such systems change over their parameter space. Conventional methodologies for designing nonlinear control laws, such as gain scheduling, are effective because the designer partitions the overall complex control task into a number of simpler sub-tasks. This paper describes a new genetic algorithm based method for the design of a modular neural network (MNN) control architecture that learns such partitions of an overall complex control task. Here a chromosome represents both the structure and the parameters of an individual neural network in the MNN controller, and a hierarchical fuzzy approach is used to select the chromosomes required to accomplish a given control task. This new strategy is applied to the end-point tracking of a single-link flexible manipulator modelled from experimental data. Results show that the MNN controller is simple to design and produces superior performance compared to a single neural network (SNN) controller which is theoretically capable of achieving the desired trajectory.
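The following toy sketch, whose encoding, operators and sizes are all assumptions rather than the paper's design, illustrates a chromosome that carries both a structure gene (hidden-layer size) and the weight parameters of one MNN module, together with simple crossover and mutation operators.

# Toy sketch: a chromosome encoding both structure and weights of one module
# in a modular neural-network controller, plus basic GA operators.
import numpy as np

rng = np.random.default_rng(1)
MAX_HIDDEN = 8          # upper bound on hidden neurons encoded in a chromosome
N_IN, N_OUT = 3, 1      # controller inputs (e.g. error terms) and output (torque)

def random_chromosome():
    """Genes: [hidden_size | flattened input->hidden and hidden->output weights]."""
    hidden = rng.integers(1, MAX_HIDDEN + 1)
    weights = rng.normal(size=MAX_HIDDEN * N_IN + MAX_HIDDEN * N_OUT)
    return np.concatenate(([hidden], weights))

def decode(chrom):
    """Build the module's control law from the active genes."""
    hidden = int(chrom[0])
    w1 = chrom[1:1 + hidden * N_IN].reshape(hidden, N_IN)
    start = 1 + MAX_HIDDEN * N_IN
    w2 = chrom[start:start + N_OUT * hidden].reshape(N_OUT, hidden)
    return lambda x: w2 @ np.tanh(w1 @ x)

def crossover(a, b):
    cut = rng.integers(1, len(a))
    return np.concatenate((a[:cut], b[cut:]))

def mutate(chrom, rate=0.05):
    child = chrom.copy()
    mask = rng.random(len(child)) < rate
    child[mask] += rng.normal(scale=0.1, size=mask.sum())
    child[0] = np.clip(np.rint(child[0]), 1, MAX_HIDDEN)   # keep structure gene valid
    return child

controller = decode(mutate(crossover(random_chromosome(), random_chromosome())))
print(controller(np.array([0.1, -0.2, 0.05])))   # control action for one state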
Abstract:
This paper shows how the concepts of lean manufacturing can be successfully applied to software development. The key lean concept is to have a minimum of work in progress, which forces problems into the open. The time is then taken to fix the production system so the errors will not occur again.
Abstract:
The use of accelerators, with compute architectures distinct from the CPU, has become a new research frontier in high-performance computing over the past five years. This paper is a case study on how the instruction-level parallelism offered by three accelerator technologies, FPGA, GPU and ClearSpeed, can be exploited in atomic physics. The algorithm studied is the evaluation of two-electron integrals using direct numerical quadrature, a task that arises in the study of intermediate-energy electron scattering by hydrogen atoms. The results of our ‘productivity’ study show that, while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.
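As a generic illustration of the kind of computation involved (not the paper's accelerator kernel), the sketch below evaluates a radial two-electron (Slater-type) integral by direct numerical quadrature; with hydrogen 1s orbitals the k = 0 value should approach the analytic 5/8 hartree.

# Generic sketch: a radial two-electron (Slater-type) integral evaluated by
# direct 2-D numerical quadrature on a uniform grid.
import numpy as np

r = np.linspace(1e-6, 25.0, 1500)          # radial grid (atomic units)
P_1s = 2.0 * r * np.exp(-r)                # reduced radial wavefunction P(r) = r*R(r)

def slater_integral(Pa, Pb, Pc, Pd, r, k=0):
    """R^k = integral of Pa(r1)Pc(r1) [r_<^k / r_>^(k+1)] Pb(r2)Pd(r2) dr1 dr2."""
    r1, r2 = r[:, None], r[None, :]
    r_less, r_greater = np.minimum(r1, r2), np.maximum(r1, r2)
    kernel = r_less**k / r_greater**(k + 1)
    integrand = (Pa * Pc)[:, None] * kernel * (Pb * Pd)[None, :]
    # Trapezoidal weights applied to both radial coordinates.
    h = r[1] - r[0]
    w = np.full_like(r, h)
    w[0] = w[-1] = 0.5 * h
    return float(w @ integrand @ w)

value = slater_integral(P_1s, P_1s, P_1s, P_1s, r)
print(value, "hartree (analytic value: 5/8 = 0.625)")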
Abstract:
The efficient development of multi-threaded software has, for many years, been an unsolved problem in computer science. Finding a solution to this problem has become urgent with the advent of multi-core processors. Furthermore, the problem has become more complicated because multi-cores are everywhere (desktop, laptop, embedded system). As such, they execute generic programs that exhibit very different characteristics from the scientific applications that have been the focus of parallel computing in the past.
Implicitly parallel programming is an approach to parallel programming that promises high productivity and efficiency and rules out synchronization errors and race conditions by design. There are two main ingredients to implicitly parallel programming: (i) a conventional sequential programming language that is extended with annotations that describe the semantics of the program and (ii) an automatic parallelizing compiler that uses the annotations to increase the degree of parallelization.
It is extremely important that the annotations and the automatic parallelizing compiler are designed with the target application domain in mind. In this paper, we discuss the Paralax approach to implicitly parallel programming and we review how the annotations and the compiler design help to successfully parallelize generic programs. We evaluate Paralax on SPECint benchmarks, which are a model for such programs, and demonstrate scalable speedups, up to a factor of 6 on 8 cores.
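As a loose conceptual analogue only (Paralax targets C-like codes, and its real annotations and compiler are far richer), the sketch below shows the flavour of the approach: the programmer keeps a sequential loop but annotates the work function as side-effect-free, and a small helper uses that hint to decide whether the iterations may be farmed out to worker processes. All names here are invented for illustration.

# Conceptual analogue of annotation-driven parallelization (not Paralax itself).
from concurrent.futures import ProcessPoolExecutor

def pure(fn):
    """Annotation: promises fn has no side effects and depends only on its args."""
    fn.is_pure = True
    return fn

def parallel_map(fn, items):
    """Parallelize only when the annotation says it is safe; else stay sequential."""
    if getattr(fn, "is_pure", False):
        with ProcessPoolExecutor() as pool:
            return list(pool.map(fn, items))
    return [fn(x) for x in items]

@pure
def work(n):
    return sum(i * i for i in range(n))   # stand-in for an expensive loop body

if __name__ == "__main__":
    print(parallel_map(work, [10_000, 20_000, 30_000, 40_000]))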