823 results for Ökonophysik, GPU-Computing
Abstract:
Purpose – While many studies have predominantly looked at the benefits and risks of cloud computing, little is known about whether and to what extent institutional forces play a role in cloud computing adoption. The purpose of this paper is to explore the role of institutional factors in the top management team's (TMT's) decision to adopt cloud computing services. Design/methodology/approach – A model is developed and tested with data from an Australian survey using the partial least squares modeling technique. Findings – The results suggest that mimetic and coercive pressures influence TMT's beliefs in the benefits of cloud computing. The results also show that TMT's beliefs drive TMT's participation, which in turn affects the intention to increase the adoption of cloud computing solutions. Research limitations/implications – Future studies could incorporate the influences of local actors who might also press for innovation. Practical implications – Given the influence of institutional forces and the plethora of cloud-based solutions on the market, it is recommended that TMTs exercise a high degree of caution when deciding which types of applications to outsource, as organizational requirements in terms of performance and security will differ. Originality/value – The paper contributes to the growing empirical literature on cloud computing adoption and offers the institutional framework as an alternative lens with which to interpret cloud-based information technology outsourcing.
Abstract:
Adopting a multi-theoretical approach, I examine external auditors’ perceptions of the reasons why organizations do or do not adopt cloud computing. I interview forensic accountants and IT experts about the adoption, acceptance, institutional motives, and risks of cloud computing. Although the medium to large accounting firms where the external auditors worked almost exclusively used private clouds, both private and public cloud services were gaining a foothold among many of their clients. Despite the advantages of cloud computing, data confidentiality and the involvement of foreign jurisdictions remain a concern, particularly if the data are moved outside Australia. Additionally, some organizations seem to understand neither the technology itself nor their own requirements, which may lead to poorly negotiated contracts and service agreements. To minimize the risks associated with cloud computing, many organizations turn to hybrid solutions or private clouds that include national or dedicated data centers. To the best of my knowledge, this is the first empirical study that reports on cloud computing adoption from the perspectives of external auditors.
Abstract:
In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general purpose distributed computing systems. ELISA uses estimated state information, based upon periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange to within the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect information algorithm and is superior to other load balancing schemes based on the random sharing and Ni-Hwang algorithms. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general purpose distributed computing systems.
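As an illustration of the estimation-based scheduling idea described above, the following sketch shows one way a node could combine periodic exact status exchange with estimate updates and buddy-set-restricted transfers. The class name, threshold, and drift rule are illustrative assumptions, not the paper's exact algorithm.

```python
class Node:
    """Illustrative node for an ELISA-style scheduler (names/parameters assumed)."""
    def __init__(self, node_id, buddies, threshold=2):
        self.node_id = node_id
        self.buddies = buddies                     # ids of neighbouring nodes (buddy set)
        self.queue = 0                             # local job queue length
        self.estimates = {b: 0 for b in buddies}   # estimated buddy queue lengths
        self.threshold = threshold

    def exchange_status(self, nodes):
        """Periodic exact status exchange, restricted to the buddy set."""
        for b in self.buddies:
            self.estimates[b] = nodes[b].queue

    def estimate_drift(self, arrival_rate, service_rate, dt):
        """Between exchanges, update estimates from assumed arrival/service rates."""
        for b in self.buddies:
            self.estimates[b] = max(0, self.estimates[b] + (arrival_rate - service_rate) * dt)

    def schedule(self, nodes):
        """Transfer one job to the least-loaded buddy if we exceed the estimated average."""
        avg = sum(self.estimates.values()) / len(self.buddies)
        if self.queue > avg + self.threshold:
            target = min(self.buddies, key=lambda b: self.estimates[b])
            self.queue -= 1
            nodes[target].queue += 1               # load transfer within buddy set only
            self.estimates[target] += 1

# Toy usage: two nodes that are each other's buddies.
nodes = {0: Node(0, [1]), 1: Node(1, [0])}
nodes[0].queue, nodes[1].queue = 8, 1
nodes[0].exchange_status(nodes)
nodes[0].schedule(nodes)
print(nodes[0].queue, nodes[1].queue)   # one job migrates to the lighter node
```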
Abstract:
Simultaneous consideration of both performance and reliability issues is important in the choice of computer architectures for real-time aerospace applications. One of the requirements for such a fault-tolerant computer system is the characteristic of graceful degradation. A shared and replicated resources computing system represents such an architecture. In this paper, a combinatorial model is used for the evaluation of the instruction execution rate of a degradable, replicated resources computing system such as a modular multiprocessor system. Next, a method is presented to evaluate the computation reliability of such a system utilizing a reliability graph model and the instruction execution rate. Finally, this computation reliability measure, which simultaneously describes both performance and reliability, is applied as a constraint in an architecture optimization model for such computing systems.
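To make the combinatorial performance view concrete, the sketch below computes an expected instruction execution rate for a gracefully degrading multiprocessor by weighting assumed per-level rates with binomial probabilities of having k working processors. The function name, the independence assumption, and the numbers are illustrative; the paper's model is more detailed.

```python
from math import comb

def expected_execution_rate(n_proc, p_up, rate_per_level):
    """Expected instruction execution rate of a gracefully degrading multiprocessor.

    rate_per_level[k] is the system execution rate with k working processors
    (illustrative values; the paper derives such rates combinatorially)."""
    return sum(
        comb(n_proc, k) * p_up**k * (1 - p_up)**(n_proc - k) * rate_per_level[k]
        for k in range(n_proc + 1)
    )

# Example: 4 processors, each up with probability 0.95; contention on shared
# resources makes the rate sub-linear in the number of working processors
# (assumed numbers, in MIPS, for 0..4 working processors).
rates = [0.0, 1.0, 1.9, 2.7, 3.4]
print(expected_execution_rate(4, 0.95, rates))
```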
Abstract:
This paper is aimed at reviewing the notion of Byzantine-resilient distributed computing systems, the relevant protocols and their possible applications as reported in the literature. The three agreement problems, namely, the consensus problem, the interactive consistency problem, and the generals problem have been discussed. Various agreement protocols for the Byzantine generals problem have been summarized in terms of their performance and level of fault-tolerance. The three classes of Byzantine agreement protocols discussed are the deterministic, randomized, and approximate agreement protocols. Finally, application of the Byzantine agreement protocols to clock synchronization is highlighted.
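As a concrete example of the deterministic agreement protocols surveyed here, the sketch below simulates a simplified OM(1) oral-messages round of the Byzantine generals algorithm for four nodes tolerating one traitor; the message model and traitor behaviour are simplifying assumptions.

```python
from collections import Counter

def om1(commander_value, traitors, n=4):
    """Simplified OM(1) round of the 'oral messages' Byzantine generals protocol
    for n = 4 nodes (node 0 is the commander), tolerating one traitor.
    A traitorous sender may send conflicting values; here it varies the bit per
    receiver. Illustrative only."""
    def sent(sender, value, receiver):
        return value if sender not in traitors else (value + receiver) % 2

    lieutenants = range(1, n)
    # Round 1: the commander sends its value to every lieutenant.
    direct = {i: sent(0, commander_value, i) for i in lieutenants}
    # Round 2: each lieutenant relays the value it received to the others,
    # then decides by majority over all values it has seen.
    decisions = {}
    for i in lieutenants:
        votes = [direct[i]] + [sent(j, direct[j], i) for j in lieutenants if j != i]
        decisions[i] = Counter(votes).most_common(1)[0][0]
    return decisions

# With one traitorous lieutenant, the loyal lieutenants still agree on the
# loyal commander's value (the interactive consistency conditions).
print(om1(commander_value=1, traitors={3}))
```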
Abstract:
The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. We describe an algorithm that constructs the Reeb graph of a Morse function defined on a 3-manifold. Our algorithm maintains connected components of the two-dimensional level sets as a dynamic graph and constructs the Reeb graph in O(n log n + n log g (log log g)^3) time, where n is the number of triangles in the tetrahedral mesh representing the 3-manifold and g is the maximum genus over all level sets of the function. We extend this algorithm to construct Reeb graphs of d-manifolds in O(n log n (log log n)^3) time, where n is the number of triangles in the simplicial complex that represents the d-manifold. Our result is a significant improvement over the previously known O(n^2) algorithm. Finally, we present experimental results of our implementation and demonstrate that our algorithm for 3-manifolds performs efficiently in practice.
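To illustrate the sweep-and-merge flavour of level-set tracking, the sketch below computes the closely related join (merge) tree of a scalar function with a union-find sweep over vertices. The full Reeb graph algorithm additionally handles splits and loops and achieves the stated bounds, which this simplified sketch does not attempt.

```python
def join_tree(values, edges):
    """Merge (join) tree of a scalar function on a graph/mesh skeleton, built by
    sweeping vertices in increasing function value with union-find. A simplified
    relative of Reeb graph construction; regular vertices appear as degree-2
    nodes, which a real implementation would contract."""
    order = sorted(range(len(values)), key=lambda v: values[v])
    parent = {}                      # union-find parent (only swept vertices)
    highest = {}                     # component representative -> highest swept vertex
    arcs = []                        # (lower_node, upper_node) arcs of the tree

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    adjacency = {v: [] for v in range(len(values))}
    for a, b in edges:
        adjacency[a].append(b)
        adjacency[b].append(a)

    for v in order:                  # sweep vertices from low to high value
        parent[v] = v
        highest[v] = v
        for u in adjacency[v]:
            if u in parent:          # u already swept: its component merges at v
                ru, rv = find(u), find(v)
                if ru != rv:
                    arcs.append((highest[ru], v))
                    parent[ru] = rv
                    highest[rv] = v
    return arcs

# Tiny example: a path whose function values form a 'W'; the three minima
# (vertices 0, 2, 4) merge at the interior maxima (vertices 1 and 3).
print(join_tree([0.0, 2.0, 1.0, 3.0, 0.5], [(0, 1), (1, 2), (2, 3), (3, 4)]))
```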
Abstract:
An onboard spacecraft computing system is a case of a functionally distributed system that requires continuous interaction among the nodes to control the operations at different nodes. A simple and reliable protocol is desired for such an application. This paper discusses a formal approach to specifying the computing system with respect to some important issues encountered in the design and development of a protocol for the onboard distributed system. The issues considered in this paper are concurrency, exclusiveness and sequencing relationships among the various processes at different nodes. A 6-tuple model is developed for the precise specification of the system. The model also enables us to check the consistency of the specification and detect deadlocks caused by improper specification. An example is given to illustrate the use of the proposed methodology for a typical spacecraft configuration. Although the theory is motivated by a specific application, it may be applied to other distributed computing systems such as those encountered in process control industries, power plant control and other similar environments.
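The abstract does not spell out the components of the 6-tuple model, so the sketch below only illustrates the kind of consistency check such a specification enables: detecting cyclic sequencing constraints among processes, which would make the specification unschedulable (deadlock-prone). The graph representation and process names are assumptions, not the paper's formalism.

```python
def has_cyclic_dependency(processes, precedes):
    """Detect cyclic sequencing constraints among processes via depth-first search.

    A cycle in the 'must precede' relation means no execution order can satisfy
    the specification, i.e. the specification is deadlock-prone. A generic
    consistency check, not the paper's 6-tuple model."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in processes}

    def dfs(p):
        colour[p] = GREY
        for q in precedes.get(p, []):
            if colour[q] == GREY:          # back edge: cyclic constraint
                return True
            if colour[q] == WHITE and dfs(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and dfs(p) for p in processes)

# Hypothetical example: telemetry must precede attitude control, which must
# precede telemetry -> inconsistent (deadlock-prone) specification.
procs = ["telemetry", "attitude_control", "thermal"]
print(has_cyclic_dependency(procs, {"telemetry": ["attitude_control"],
                                    "attitude_control": ["telemetry"]}))
```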
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multicore architectures. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on accelerators such as Graphics Processing Units (GPUs) or CellBE which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work-partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X with a maximum of 51.96X over a single threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
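The sketch below illustrates the cost trade-off behind the CPU-GPU work partitioning with a simplified objective (per-filter execution time plus transfer penalties on cross-device stream edges) and a greedy improvement heuristic. It is not the paper's ILP formulation or its heuristic; all names and numbers are assumptions.

```python
def partition_cost(assign, cpu_time, gpu_time, edges, transfer_cost):
    """Cost of assigning each filter to 'cpu' or 'gpu': execution time on the
    chosen device plus a transfer penalty on every cross-device stream edge
    (a simplified stand-in for the paper's ILP objective)."""
    exec_cost = sum(cpu_time[f] if d == "cpu" else gpu_time[f] for f, d in assign.items())
    comm_cost = sum(transfer_cost[(a, b)] for a, b in edges if assign[a] != assign[b])
    return exec_cost + comm_cost

def greedy_partition(filters, cpu_time, gpu_time, edges, transfer_cost, stateful):
    """Toy heuristic: stateful filters are pinned to the CPU, every other filter
    starts on its faster device, and single-filter moves are applied while they
    reduce the total cost. Illustrative only."""
    assign = {f: "cpu" if f in stateful or cpu_time[f] <= gpu_time[f] else "gpu"
              for f in filters}
    improved = True
    while improved:
        improved = False
        base = partition_cost(assign, cpu_time, gpu_time, edges, transfer_cost)
        for f in filters:
            if f in stateful:
                continue
            trial = dict(assign)
            trial[f] = "gpu" if assign[f] == "cpu" else "cpu"
            trial_cost = partition_cost(trial, cpu_time, gpu_time, edges, transfer_cost)
            if trial_cost < base:
                assign, base, improved = trial, trial_cost, True
    return assign

# Toy pipeline A -> B -> C where B has persistent state and must stay on the CPU.
print(greedy_partition(["A", "B", "C"],
                       cpu_time={"A": 5.0, "B": 2.0, "C": 6.0},
                       gpu_time={"A": 1.0, "B": 9.0, "C": 1.5},
                       edges=[("A", "B"), ("B", "C")],
                       transfer_cost={("A", "B"): 0.5, ("B", "C"): 0.5},
                       stateful={"B"}))
```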
Abstract:
The existing internet computing resource, Biomolecules Segment Display Device (BSDD), has been updated with several additional useful features. An advanced option is provided to superpose the structural motifs obtained from a search on the Protein Data Bank (PDB) in order to see if the three-dimensional structures adopted by identical or similar sequence motifs are the same. Furthermore, the options to display structural aspects like inter- and intra-molecular interactions, ion-pairs, disulphide bonds, etc. have been provided. The updated resource is interfaced with an up-to-date copy of the public domain PDB as well as 25 and 90% non-redundant protein structures. Further, users can upload the three-dimensional atomic coordinates (PDB format) from the client machine. A free molecular graphics program, JMol, is interfaced with it to display the three-dimensional structures.
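For the motif-superposition option, a standard least-squares (Kabsch) superposition could look like the sketch below; the abstract does not state which method BSDD itself implements, so this is only a generic illustration of how matched motifs can be compared.

```python
import numpy as np

def superpose(P, Q):
    """Least-squares rigid-body superposition (Kabsch algorithm) of two
    equal-length coordinate sets, e.g. corresponding atoms of two sequence
    motifs. Returns the RMSD after optimal rotation and translation."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)     # centre both motifs
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)                 # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T             # optimal rotation
    diff = (Pc @ R.T) - Qc
    return np.sqrt((diff ** 2).sum() / len(P))

# Identical motifs related by a pure translation superpose with RMSD ~ 0.
motif = [[0, 0, 0], [1.5, 0, 0], [1.5, 1.5, 0], [3.0, 1.5, 0.5]]
shifted = [[x + 2.0, y - 1.0, z + 0.5] for x, y, z in motif]
print(round(superpose(motif, shifted), 6))
```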
Abstract:
Effectiveness evaluation of aerospace fault-tolerant computing systems used in a phased-mission environment is rather tricky and difficult because of the interaction of their several degraded performance levels with the multiple objectives of the mission and the use environment. Part I uses an approach based on multiobjective phased-mission analysis to evaluate the effectiveness of a distributed avionics architecture used in a transport aircraft. Part II views the computing system as a multistate s-coherent structure. Lower bounds on the probabilities of accomplishing various levels of performance are evaluated.
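A minimal numerical illustration of the multistate, phased-mission view: under the common assumption of associated component states, the product of per-phase probabilities of reaching a given performance level lower-bounds the mission-level probability. The function names and numbers below are assumptions, not the paper's case study or its exact bounds.

```python
def phase_level_probability(state_probs, level):
    """Probability that the multistate system is at performance level >= `level`
    in a single phase, given its per-phase level distribution."""
    return sum(p for lvl, p in state_probs.items() if lvl >= level)

def mission_level_lower_bound(phases, level):
    """Product-of-phases bound on accomplishing at least `level` over the whole
    mission; for s-coherent structures with associated component states the
    product bounds the joint probability from below. Illustrative numbers only."""
    bound = 1.0
    for state_probs in phases:
        bound *= phase_level_probability(state_probs, level)
    return bound

# Three mission phases; performance levels 0 (failed), 1 (degraded), 2 (full).
phases = [{2: 0.90, 1: 0.08, 0: 0.02},
          {2: 0.85, 1: 0.10, 0: 0.05},
          {2: 0.95, 1: 0.04, 0: 0.01}]
print(mission_level_lower_bound(phases, level=1))   # at least degraded in every phase
```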
Abstract:
This research studied distributed computing of all-to-all comparison problems with big data sets. The thesis formalised the problem and developed a high-performance and scalable computing framework with a programming model, data distribution strategies and task scheduling policies to solve the problem. The study considered storage usage, data locality and load balancing for performance improvement in solving the problem. The research outcomes can be applied in bioinformatics, biometrics, data mining and other domains in which all-to-all comparisons are a typical computing pattern.
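The computing pattern can be made concrete with the naive sketch below, which turns an all-to-all comparison into pairwise tasks, deals them across workers, and derives each worker's required data placement. The thesis's framework uses considerably more refined distribution strategies and scheduling policies; this only illustrates the pattern.

```python
from itertools import combinations

def distribute_pairs(items, n_workers):
    """Naive scheduling of an all-to-all comparison: every unordered pair is a
    task, tasks are dealt round-robin across workers, and each worker's required
    data set is the union of the items its tasks touch."""
    assignments = {w: [] for w in range(n_workers)}
    for i, pair in enumerate(combinations(items, 2)):
        assignments[i % n_workers].append(pair)
    data_needed = {w: sorted({x for pair in pairs for x in pair})
                   for w, pairs in assignments.items()}
    return assignments, data_needed

# Five data items compared pairwise on two workers (hypothetical identifiers).
tasks, data = distribute_pairs(["g1", "g2", "g3", "g4", "g5"], n_workers=2)
print(tasks[0], data[0])   # tasks on worker 0 and the items it must store locally
```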
Abstract:
The location management problem that arises in mobile computing networks is addressed. One method used in location management is to designate some of the cells in the network as "reporting cells". The other cells in the network are "non-reporting cells". Finding an optimal set of reporting cells (or reporting cell configuration) for a given network is a difficult combinatorial optimization problem; in fact, it was shown to be an NP-complete problem in an earlier study. In this paper, we use the selective paging strategy and an ant colony optimization method to obtain the best/optimal set of reporting cells for a given network.
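The sketch below evaluates a simplified version of the usual reporting-cell cost model (location-update cost plus paging cost based on vicinities). The paper's combination of selective paging with ant colony optimization is not reproduced here, and the weighting and vicinity definition are assumptions.

```python
from collections import deque

def vicinity(rc, reporting, neighbours):
    """Cells a user can reach from reporting cell `rc` without entering another
    reporting cell; these are the cells paged for users last seen at `rc`."""
    seen, queue = {rc}, deque([rc])
    while queue:
        for n in neighbours[queue.popleft()]:
            if n not in seen and n not in reporting:
                seen.add(n)
                queue.append(n)
    return seen

def configuration_cost(reporting, neighbours, moves, calls, beta=0.5):
    """Simplified reporting-cell objective: movement (location-update) cost
    weighted by beta plus paging cost, where each cell's paging penalty is the
    size of the largest vicinity it belongs to."""
    vics = {rc: vicinity(rc, reporting, neighbours) for rc in reporting}
    update_cost = sum(moves[rc] for rc in reporting)
    paging_cost = 0
    for cell in neighbours:
        v = max((len(v_) for v_ in vics.values() if cell in v_), default=len(neighbours))
        paging_cost += calls[cell] * v
    return beta * update_cost + paging_cost

# 2x2 toy network: cells 0-3, with cells 0 and 3 chosen as reporting cells.
nbrs = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(configuration_cost(reporting={0, 3}, neighbours=nbrs,
                         moves={0: 10, 1: 7, 2: 7, 3: 10},
                         calls={0: 3, 1: 2, 2: 2, 3: 3}))
```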
Abstract:
Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis, and for the first two model classes we also discuss how to compute exact rational number solutions.
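For reference, the NML quantities for the multinomial model can be computed directly from their definition for small problems, as in the sketch below. This brute-force enumeration over count vectors is exponential and is exactly the kind of cost the thesis's efficient algorithms avoid; the function names are assumptions.

```python
from math import comb, log, prod

def multinomial_complexity(K, n):
    """Parametric complexity C(K, n) of the K-category multinomial model with n
    observations, computed directly from its definition as a sum over all count
    vectors (exponential in K; for illustration only)."""
    def compositions(total, parts):
        if parts == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest

    total = 0.0
    for counts in compositions(n, K):
        coef, remaining = 1, n
        for k in counts:                 # multinomial coefficient n!/(k1!...kK!)
            coef *= comb(remaining, k)
            remaining -= k
        lik = prod((k / n) ** k for k in counts if k > 0)   # maximized likelihood
        total += coef * lik
    return total

def stochastic_complexity(counts):
    """NML code length (in nats) of an observed multinomial count vector:
    minus the maximized log-likelihood plus the log of the normalizer."""
    n, K = sum(counts), len(counts)
    max_loglik = sum(k * log(k / n) for k in counts if k > 0)
    return -max_loglik + log(multinomial_complexity(K, n))

print(stochastic_complexity([6, 3, 1]))   # three categories, ten observations
```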
Abstract:
Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. This computationalization and informationalization of everyday activities increases not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems, and on the other hand an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems as regards data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications called ContextPhone is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. The usage of the system is analyzed, based on several long-term field studies, in the light of how users make inferences about others from real-time contextual cues mediated by the system. The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy for ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems but also to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to allow them to use the data themselves, rather than just being passive subjects of data gathering.
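As a purely hypothetical illustration of the privacy property of on-terminal locationing (the thesis's actual algorithm is not described in the abstract), the sketch below estimates a position from observed cells against a locally stored tower database, so no observation ever leaves the device. All identifiers, weights and coordinates are made up for the example.

```python
def on_device_position(observed_cells, tower_db):
    """Hypothetical client-side locationing: estimate position as the
    signal-strength weighted centroid of the observed cells' tower coordinates,
    using a database stored on the terminal, so nothing is sent to a server.
    Illustrates the privacy property only, not the thesis's algorithm."""
    weighted_lat = weighted_lon = total_weight = 0.0
    for cell_id, signal_dbm in observed_cells:
        if cell_id not in tower_db:
            continue
        lat, lon = tower_db[cell_id]
        weight = max(signal_dbm + 110.0, 1.0)   # crude weight from signal strength
        weighted_lat += weight * lat
        weighted_lon += weight * lon
        total_weight += weight
    if total_weight == 0:
        return None
    return weighted_lat / total_weight, weighted_lon / total_weight

# Two observed cells (RSSI in dBm) and locally stored tower positions.
towers = {"244-91-1001": (60.187, 24.824), "244-91-1002": (60.190, 24.830)}
print(on_device_position([("244-91-1001", -70), ("244-91-1002", -90)], towers))
```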
Abstract:
There is an increase in the uptake of cloud computing services (CCS). CCS is adopted in the form of a utility, and it incorporates the business risks of the service providers and intermediaries. Thus, the adoption of CCS will change the risk profile of an organization. In this situation, organizations need to develop competencies by reconsidering their IT governance structures to achieve a desired level of IT-business alignment and maintain their risk appetite to source business value from CCS. We use resource-based theories to suggest that collaborative board oversight of CCS, competencies relating to CCS information and financial management, and a CCS-related continuous audit program can contribute to business process performance improvements and overall firm performance. Using survey data, we find evidence of a positive association between these IT governance considerations and business process performance. We also find evidence of a positive association between business process performance improvements and overall firm performance. The results suggest that the suggested considerations on IT governance structures can contribute to CCS-related IT-business alignment and lead to the anticipated business value from CCS. This study provides guidance to organizations on the competencies required to secure business value from CCS.