742 results for Cloud Computing Modelli di Business
Abstract:
Although customer retention is crucial for providers of cloud enterprise systems, little attention has been directed towards investigating the antecedents of subscription renewal in an organizational context. This is surprising, as cloud services are usually offered under subscription-based pricing models with the (theoretical) possibility of immediate service cancellation, in sharp contrast to classical long-term IT outsourcing contracts or license-based payment plans of on-premises enterprise systems. To close this research gap, an empirical study was undertaken. First, a conceptual model was drawn from theories of social psychology, organizational system continuance and IS success. The model was subsequently tested using survey responses from senior managers of companies that had adopted cloud enterprise systems. The gathered data were then analysed using PLS. The results indicate that subscription renewal intention is influenced by both social-related and technology-specific factors, which together explain 50.4% of the variance in the dependent variable. Beyond the contributions specific to cloud enterprise systems, the work advances knowledge in the areas of organizational system continuance and IS success.
Abstract:
Competent leadership of digital transformation needs to involve the board of directors. The reported lack of such capability in boards is becoming a pressing issue. Part of leading such a transformation is the board of directors' competence to lead Enterprise Business Technology Governance (EBTG). In this paper we take the position that EBTG competencies are essential in boards, because competent EBTG has been shown to contribute to increased revenue, profit, and returns. We update and expand on the results of a multi-method approach to developing a set of three board-of-director competencies needed for effective EBTG.
Abstract:
Cloud computing, based on early virtual computer concepts and technologies, is now a maturing technology in the marketplace. It has revolutionized the IT industry and is the powerful platform onto which many businesses are choosing to migrate their on-premises IT services. Cloud solutions have the potential to reduce the capital and operational expenses associated with deploying IT services in-house. In this study, we have implemented our own private cloud solution, infrastructure as a service (IaaS), using the OpenStack platform with high availability and a dynamic resource allocation mechanism. In addition, we have hosted unified communications as a service (UCaaS) on the underlying IaaS and successfully tested voice over IP (VoIP), video conferencing, voice mail and instant messaging (IM) with clients located at a remote site. The proposed solution has been developed to guide businesses that want to build their own cloud environment and IaaS and host cloud services and applications in the cloud. This paper also aims to provide an alternative to proprietary cloud solutions for service providers to consider.
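The abstract does not include deployment details; purely as an illustration, here is a minimal sketch of launching an IaaS instance on an OpenStack private cloud using the openstacksdk Python client. The cloud name, image, flavor and network IDs are placeholders, not values from the paper.

```python
# Minimal sketch: boot a server on an OpenStack IaaS with openstacksdk.
# All identifiers below are placeholders, not taken from the paper.
import openstack

# Reads credentials for the named cloud from clouds.yaml.
conn = openstack.connect(cloud="private-cloud")

# Launch a VM that could host a UCaaS component (e.g. a VoIP server).
server = conn.compute.create_server(
    name="ucaas-voip-01",
    image_id="IMAGE_UUID",
    flavor_id="FLAVOR_UUID",
    networks=[{"uuid": "NETWORK_UUID"}],
)

# Block until the instance becomes ACTIVE, then report it.
server = conn.compute.wait_for_server(server)
print(server.id, server.status)
```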
Abstract:
For the past few years, research on secure outsourcing of cryptographic computations has drawn significant attention from academics in the security and cryptology disciplines as well as from information security practitioners. One main reason for this interest is its application to resource-constrained devices such as RFID tags. While there has been significant progress in this domain since Hohenberger and Lysyanskaya provided formal security notions for secure computation delegation, some interesting challenges still need to be solved before cryptographic protocols that enable secure outsourcing of cryptographic computations can be deployed more widely. This position paper sets out these challenging problems, with RFID technology as the use case, together with our ideas, where applicable, on directions towards solving them.
Abstract:
Quality of Service (QoS) is a new issue in cloud-based MapReduce, a popular computation model for parallel and distributed processing of big data. Guaranteeing QoS is challenging in a dynamic computation environment because a fixed resource allocation may become under-provisioned, leading to QoS violations, or over-provisioned, incurring unnecessary resource cost. This calls for runtime resource scaling that adapts to environmental changes. Aiming to guarantee the QoS, which in this work refers to a hard deadline, this paper develops a theory that determines how and when resources are scaled up or down for cloud-based MapReduce. The theory employs a nonlinear transformation to define the problem in a reverse resource space, which simplifies the theoretical analysis significantly. Theoretical results are then presented in three theorems giving sufficient conditions for guaranteeing the QoS of cloud-based MapReduce. The superiority and applications of the theory are demonstrated through case studies.
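The abstract does not reproduce the theorems; as a rough illustration of the deadline-driven scaling decision it describes, the following minimal Python sketch estimates how many workers are needed to meet a hard deadline. The simple work-estimation model and all parameter names are assumptions for illustration, not the paper's reverse-resource-space analysis.

```python
import math

def required_workers(pending_tasks: int, avg_task_seconds: float,
                     seconds_to_deadline: float) -> int:
    """Estimate how many parallel workers are needed so the remaining
    MapReduce tasks finish before the hard deadline. Assumes tasks are
    independent and roughly uniform in duration."""
    if seconds_to_deadline <= 0:
        raise ValueError("deadline already passed")
    total_work = pending_tasks * avg_task_seconds
    return math.ceil(total_work / seconds_to_deadline)

def scaling_decision(current_workers: int, pending_tasks: int,
                     avg_task_seconds: float, seconds_to_deadline: float) -> int:
    """Return how many workers to add (positive) or release (negative)."""
    needed = required_workers(pending_tasks, avg_task_seconds, seconds_to_deadline)
    return needed - current_workers

# Example: 400 pending tasks of ~30 s each, one hour to the deadline.
print(scaling_decision(current_workers=2, pending_tasks=400,
                       avg_task_seconds=30.0, seconds_to_deadline=3600.0))
```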
Abstract:
Many applications, such as software for processing customer records in telecom, patient records in hospitals, or email software accessing a single message in a mailbox, need to retrieve a single record from a database consisting of millions of records. A basic feature of these applications is that they access data sets which are very large but simple. Cloud computing meets the computing requirements of this new generation of applications involving very large data sets, which cannot be handled efficiently using traditional computing infrastructure. In this paper, we describe the storage services provided by three well-known cloud service providers and compare their features in order to characterize the storage requirements of very large data sets; we hope this will act as a catalyst for the design of storage services for very large data sets in the future. We also give a brief overview of other kinds of storage that have emerged recently for cloud computing.
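The abstract does not name the storage services compared. Purely as an illustration of the access pattern in question (fetching one record out of millions by key), here is a minimal sketch using Amazon DynamoDB via boto3; the table name, key name and record layout are hypothetical, and DynamoDB may or may not be among the services the paper compares.

```python
# Illustrative only: key-based lookup of a single record in a cloud
# key-value store. Table, key and attribute names are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
patients = dynamodb.Table("PatientRecords")

# Retrieve exactly one record out of millions by its primary key.
response = patients.get_item(Key={"patient_id": "P-1029384"})
record = response.get("Item")
print(record)
```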
Abstract:
In this paper we present a combination of technologies to provide an Energy-on-Demand (EoD) service that enables low-cost innovation suitable for microgrid networks. The system is designed around the low-cost, simple Rural Energy Device (RED) Box, which, in combination with Short Message Service (SMS) communication, serves as an elementary proxy for the smart meters typically used in urban settings. Customer behaviour and familiarity with such devices, gained from mobile phone experience, have been incorporated into the design philosophy. Customers are incentivized to interact with the system, thereby providing valuable behavioural and usage data to the Utility Service Provider (USP). Data collected over time can be used by the USP for analytics carried out on remote computing services, i.e. cloud computing, which allows computational resources to be shared at the virtual level across several networks. The customer-system interaction is facilitated by a third-party Telecom Service Provider (TSP). The approximate cost of the RED Box is envisaged to be under USD 10 at production scale.
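The abstract does not specify the SMS message format; as a purely hypothetical sketch of how the USP backend might parse a usage report relayed from a RED Box, the following assumes an invented "<device_id> USAGE <kWh>" format, which is not taken from the paper.

```python
# Hypothetical sketch: parsing an SMS usage report from a RED Box.
# The message format is an assumption for illustration only.
def parse_usage_sms(body: str) -> dict:
    device_id, keyword, kwh = body.strip().split()
    if keyword.upper() != "USAGE":
        raise ValueError("unsupported SMS command")
    return {"device_id": device_id, "kwh": float(kwh)}

print(parse_usage_sms("RED-0042 USAGE 1.75"))
```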
Abstract:
Virtualization is one of the key enabling technologies for Cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources like CPU, memory, network interfaces and disk controllers. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the Cloud. Disk I/O requests in a typical Cloud setup may have varied requirements in terms of latency and throughput, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differentiated performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to consider I/O performance metrics of applications, such as acceptable latency, and convert them to an appropriate priority value for disk access based on the current system state. This framework aims to provide differentiated I/O service to various applications and ensures predictable performance for critical applications in a multi-tenant Cloud environment. We demonstrate through experimental validation on real-world I/O traces that this framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees on Cloud storage.
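The abstract does not give PriDyn's priority formula; as a rough, hypothetical illustration of converting an application's acceptable latency and the current system state into a disk-access priority, the slack-based weighting below is an assumption, not the paper's actual scheme.

```python
def disk_priority(acceptable_latency_ms: float,
                  observed_latency_ms: float) -> float:
    """Higher value = more urgent. An application whose observed latency
    is close to (or beyond) what it can tolerate gets boosted."""
    slack = acceptable_latency_ms - observed_latency_ms
    if slack <= 0:
        return float("inf")          # already violating its latency target
    return 1.0 / slack               # less slack -> higher priority

pending = {
    "db-tenant": disk_priority(10.0, 9.0),     # very little slack left
    "batch-job": disk_priority(200.0, 40.0),   # plenty of slack
}
# Serve the most urgent tenant's request first.
print(max(pending, key=pending.get))
```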
Abstract:
One of the most challenging problems in mobile broadband networks is how to assign the available radio resources among the different mobile users. Traditionally, research proposals are either specific to some type of traffic or rely on computationally intensive algorithms aimed at optimizing the delivery of general-purpose traffic. Consequently, commercial networks do not incorporate these mechanisms, due to the limited hardware resources at the mobile edge. Emerging 5G architectures introduce cloud computing principles to add flexible computational resources to Radio Access Networks. This paper makes use of Mobile Edge Computing concepts to introduce a new element, denoted the Mobile Edge Scheduler, aimed at minimizing the mean delay of general traffic flows in the LTE downlink. This element runs close to the eNodeB and implements a novel flow-aware and channel-aware scheduling policy in order to adapt transmissions to the available channel quality of end users.
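The abstract does not detail the scheduling policy; as a generic illustration of a flow-aware and channel-aware selection rule, the sketch below picks the flow that can be completed soonest at its current achievable rate. This scoring rule is an assumption for illustration, not the Mobile Edge Scheduler's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    remaining_bytes: int       # flow-awareness: how much is left to send
    channel_rate_bps: float    # channel-awareness: current achievable rate

def pick_next_flow(flows: list[Flow]) -> Flow:
    """Serve the flow that can be finished soonest at its current rate:
    a shortest-remaining-time rule weighted by channel quality."""
    return min(flows, key=lambda f: f.remaining_bytes * 8 / f.channel_rate_bps)

flows = [
    Flow("web-page", remaining_bytes=50_000, channel_rate_bps=5e6),
    Flow("video-chunk", remaining_bytes=2_000_000, channel_rate_bps=20e6),
]
print(pick_next_flow(flows).name)   # -> "web-page"
```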
Abstract:
Implementations are presented of two common algorithms for integer factorization, Pollard’s “p – 1” method and the SQUFOF method. The algorithms are implemented in the F# language, a functional programming language developed by Microsoft and officially released for the first time in 2010. The algorithms are thoroughly tested on a set of large integers (up to 64 bits in size), running both on a physical machine and a Windows Azure machine instance. Analysis of the relative performance between the two environments indicates comparable performance when taking into account the difference in computing power. Further analysis reveals that the relative performance of the Azure implementation tends to improve as the magnitudes of the integers increase, indicating that such an approach may be suitable for larger, more complex factorization tasks. Finally, several questions are presented for future research, including the performance of F# and related languages for more efficient, parallelizable algorithms, and the relative cost and performance of factorization algorithms in various environments, including physical hardware and commercial cloud computing offerings from the various vendors in the industry.
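The paper's F# source is not reproduced in the abstract; for reference, here is a minimal Python sketch of the Pollard "p - 1" method it implements. The smoothness bound and the test integer are arbitrary illustrative choices.

```python
import math

def pollard_p_minus_1(n: int, bound: int = 10_000) -> int | None:
    """Pollard's p-1 factorization: returns a nontrivial factor of n,
    or None if none is found with this smoothness bound."""
    a = 2
    for j in range(2, bound + 1):
        a = pow(a, j, n)            # incrementally builds a = 2^(j!) mod n
        g = math.gcd(a - 1, n)
        if 1 < g < n:
            return g
    return None

# 1403 = 23 * 61; both 23 - 1 and 61 - 1 are smooth, so a factor appears quickly.
print(pollard_p_minus_1(1403))
```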
Abstract:
The increasing penetration of feature-rich mobile devices such as smartphones and tablets in the global population has resulted in a large number of applications and services being created or modified to support mobile devices. Mobile cloud computing is a proposed paradigm to address the resource scarcity of mobile devices in the face of demand for more computationally intensive tasks. Several approaches have been proposed to confront the challenges of mobile cloud computing, but none has taken the user experience as its primary focus. In this paper we evaluate these approaches with respect to the user experience, propose what future research in this area must address to provide for this crucial aspect, and introduce our own solution.
Abstract:
Nearly one billion smart mobile devices are now used for a growing number of tasks, such as browsing the web and accessing online services. In many communities, such devices are becoming the platform of choice for tasks traditionally carried out on a personal computer. However, despite the advances, these devices still lack resources compared to their traditional desktop counterparts. Mobile cloud computing is seen as a new paradigm that can address the resource shortcomings of these devices with the plentiful computing resources of the cloud. This can enable the mobile device to be used for a large range of new, cloud-hosted applications that are too resource-demanding to run locally. Bringing these two technologies together presents various difficulties. In this paper, we examine the advantages of the mobile cloud and the new approaches to applications it enables. We present our own solution for creating a positive user experience for such applications and describe how it enables them.
Abstract:
The mobile cloud computing model promises to address the resource limitations of mobile devices, but effectively implementing this model is difficult. Previous work on mobile cloud computing has required the user to have a continuous, high-quality connection to the cloud infrastructure. This is undesirable and possibly infeasible, as the energy required on the mobile device to maintain a connection and transfer sizeable amounts of data is large, and the bandwidth tends to be quite variable and low on cellular networks. The cloud deployment itself also needs to allocate scalable resources to the user efficiently. In this paper, we formulate best practices for efficiently managing the resources required by the mobile cloud model, namely energy, bandwidth and cloud computing resources. These practices can be realised with our mobile cloud middleware project, featuring the Cloud Personal Assistant (CPA). We compare this with other approaches in the area to highlight the importance of minimising the usage of these resources and thereby ensuring successful adoption of the model by end users. Based on results from experiments performed with mobile devices, we develop a no-overhead decision model for task and data offloading to a user's CPA, which provides efficient management of mobile cloud resources.
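The abstract does not detail the decision model; as a generic illustration of an energy- and bandwidth-aware offloading decision, the sketch below compares the energy cost of local execution against transmitting the input and waiting for the cloud. The cost model, power figures and parameter names are assumptions for illustration, not the CPA's actual model.

```python
def should_offload(input_bytes: int, local_seconds: float,
                   remote_seconds: float, bandwidth_bps: float,
                   cpu_power_w: float = 0.9, radio_power_w: float = 1.3,
                   idle_power_w: float = 0.05) -> bool:
    """Offload when the energy spent transmitting the input and idling while
    the cloud works is lower than the energy of computing locally.
    All power figures are illustrative placeholders."""
    transfer_seconds = input_bytes * 8 / bandwidth_bps
    local_energy = cpu_power_w * local_seconds
    offload_energy = (radio_power_w * transfer_seconds
                      + idle_power_w * remote_seconds)
    return offload_energy < local_energy

# 5 MB of input, 40 s locally vs 4 s in the cloud over a 2 Mbit/s link.
print(should_offload(5_000_000, 40.0, 4.0, 2e6))
```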
Abstract:
Cloud services provide their users with flexible resource provisioning, but in the current market a user has to choose from a limited set of configurations at a fixed price. This paper presents an autonomous negotiation system, termed CloudNeg, for negotiating cloud services. CloudNeg provides buyers and sellers of cloud services with autonomous agents that negotiate the specifications of a cloud instance, including price, on their behalf. These agents elicit their buyers' time preferences and use them in negotiations. Further, this paper presents two artifacts, a negotiation algorithm and a prototype, which together form CloudNeg.
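The abstract does not specify CloudNeg's algorithm; as a generic illustration of how an agent can fold a buyer's time preference into its offers, here is a minimal sketch of a standard time-dependent concession tactic, which is not necessarily the one CloudNeg uses.

```python
def buyer_offer(t: float, deadline: float, initial_price: float,
                reservation_price: float, beta: float) -> float:
    """Price offered at time t. beta < 1 concedes slowly (patient buyer),
    beta > 1 concedes quickly (impatient buyer)."""
    fraction = min(t / deadline, 1.0) ** (1.0 / beta)
    return initial_price + fraction * (reservation_price - initial_price)

# A patient buyer (beta=0.5) vs an impatient one (beta=2.0), halfway to deadline.
print(buyer_offer(5, 10, 0.02, 0.10, 0.5))   # still close to the opening offer
print(buyer_offer(5, 10, 0.02, 0.10, 2.0))   # much nearer the reservation price
```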
Abstract:
The mobile cloud computing paradigm can offer relevant and useful services to the users of smart mobile devices. Such public services already exist on the web and in cloud deployments, implemented using common web service standards. However, these services are described by mark-up languages, such as XML, that cannot be comprehended by non-specialists. Furthermore, the lack of common interfaces for related services makes discovery and consumption difficult for both users and software. The problem of service description, discovery, and consumption for the mobile cloud must be addressed to allow users to benefit from these services on mobile devices. This paper introduces our work on a mobile cloud service discovery solution, which is utilised by our mobile cloud middleware, Context Aware Mobile Cloud Services (CAMCS). The aim of our approach is to remove complex mark-up languages from the description and discovery process. By means of the Cloud Personal Assistant (CPA) assigned to each user of CAMCS, relevant mobile cloud services can be discovered and consumed easily by the end user from the mobile device. We present the discovery process, the architecture of our own service registry, and the service description structure. CAMCS allows services to be used from the mobile device through a user's CPA, by means of user-defined tasks. We present the task model of the CPA enabled by our solution, including automatic tasks, which can perform work for the user without an explicit request.
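The paper's service description structure is not given in the abstract; as a hypothetical illustration of the kind of lightweight, mark-up-free description a registry like this might hold, the sketch below uses invented field names and categories, not CAMCS's actual structure.

```python
# Hypothetical, simplified service descriptions and registry lookup;
# field names, categories and endpoints are invented for illustration.
registry = [
    {"name": "weather-lookup", "category": "weather",
     "endpoint": "https://example.org/weather", "inputs": ["city"]},
    {"name": "parcel-tracker", "category": "logistics",
     "endpoint": "https://example.org/track", "inputs": ["tracking_id"]},
]

def discover(category: str) -> list[dict]:
    """Return all registered services matching the requested category."""
    return [s for s in registry if s["category"] == category]

# A CPA-style lookup on behalf of the user: find a weather service.
print([s["name"] for s in discover("weather")])
```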