692 results for cloud computing services
Abstract:
With the introduction of the Personally Controlled Electronic Health Record (PCEHR), the Australian public is being asked to accept greater responsibility for their healthcare by taking an active role in the management of personal health information. Although well designed, constructed and intentioned, policy and privacy concerns have resulted in an eHealth model that may impact future health information sharing requirements. Hence, as a case study for a consumer eHealth initiative in the Australian context, eHealth-as-a-Service (eHaaS) serves as a disruptive step in the aggregation and transformation of health information for use as real-world knowledge. The strategic value of extending the community Health Record Bank (HRB) model lies in the ability to automatically draw on a multitude of relevant data repositories and sources to create a single source of truth and to engage market forces to create financial sustainability. The opportunity to transform the beleaguered Australian PCEHR into a realisable and sustainable technology consumption model for patient safety is explored. Moreover, the current clerical focus of healthcare practitioners acting in the role of de facto record keepers is renegotiated to establish a shared knowledge creation landscape of action for safer patient interventions. Achieving this potential, however, requires a platform that will facilitate efficient and trusted unification of all health information available in real time across the continuum of care. eHaaS provides a sustainable environment and encouragement to realise this potential.
Abstract:
This paper presents a formative measurement index to assess cloud enterprise systems success. The scale development procedure is based on Moore and Benbasat (1991), including newer scale development elements which focus on the creation and assessment of formative constructs. The data is analysed using SmartPLS with a sample of 103 IT decision makers. The results show that the perception of net benefits is shaped not only by enterprise-system-specific factors like productivity improvements and higher quality of business processes, but also by factors which are specifically attributed to cloud systems, such as higher strategic flexibility. Reliability, user requirements and customization contribute most to the overall perception of system quality. Information quality shows no cloud-specific facets and is robust in the context of cloud enterprise systems.
Abstract:
With the introduction of the Personally Controlled Electronic Health Record (PCEHR), the Australian public is being asked to accept greater responsibility for their healthcare. Although well designed, constructed and intentioned, policy and privacy concerns have resulted in an eHealth model that may impact future health information sharing requirements. Thus, the opportunity to transform the beleaguered Australian PCEHR into a sustainable on-demand technology consumption model for patient safety must be explored further. Moreover, the current clerical focus of healthcare practitioners must be renegotiated to establish a shared knowledge creation landscape of action for safer patient interventions. Achieving this potential, however, requires a platform that will facilitate efficient and trusted unification of all health information available in real time across the continuum of care. As a conceptual paper, the goal of the authors is to deliver insights into the antecedents of usage influencing superior patient outcomes within an eHealth-as-a-Service framework. To achieve this, the paper attempts to distil key concepts and identify common themes drawn from a preliminary literature review of eHealth and cloud computing concepts, specifically cloud service orchestration, to establish a conceptual framework and a research agenda. Initial findings support the authors’ view that an eHealth-as-a-Service (eHaaS) construct will serve as a disruptive paradigm shift in the aggregation and transformation of health information for use as real-world knowledge in patient care scenarios. Moreover, the strategic value of extending the community Health Record Bank (HRB) model lies in the ability to automatically draw on a multitude of relevant data repositories and sources to create a single source of practice-based evidence and to engage market forces to create financial sustainability.
Abstract:
For the past few years, research on the secure outsourcing of cryptographic computations has drawn significant attention from academics in the security and cryptology disciplines, as well as from information security practitioners. One main reason for this interest is its application to resource-constrained devices such as RFID tags. While there has been significant progress in this domain since Hohenberger and Lysyanskaya provided formal security notions for secure computation delegation, some interesting challenges remain whose solutions would support a wider deployment of cryptographic protocols that enable secure outsourcing of cryptographic computations. This position paper presents these challenging problems, with RFID technology as the use case, together with our ideas, where applicable, on directions towards solving them.
Abstract:
With the introduction of the PCEHR (Personally Controlled Electronic Health Record), the Australian public is being asked to accept greater responsibility for the management of their health information. However, the implementation of the PCEHR has been marked by poor adoption rates, underscored by criticism from stakeholders with concerns about transparency, accountability, privacy, confidentiality, governance, and limited capabilities. This study adopts an ethnographic lens to observe how information is created and used during the patient journey, and the social factors impacting the adoption of the PCEHR at the micro level, in order to develop a conceptual model that will encourage the sharing of patient information within the cycle of care. Objective: This study aims, firstly, to establish a basic understanding of healthcare professional attitudes toward a national platform for sharing patient summary information in the form of a PCEHR and, secondly, to map the flow of patient-related information as it traverses a patient’s personal cycle of care. An ethnographic approach was used to bring a “real world” lens to information flow in a series of case studies in the Australian healthcare system, to discover themes and issues that are important from the patient’s perspective. Design: Qualitative study utilising ethnographic case studies. Setting: Case studies were conducted with primary and allied healthcare professionals located in Brisbane, Queensland, between October 2013 and July 2014. Results: In the first dimension, healthcare professionals’ concerns about trust and medico-legal issues related to patient control and information quality, together with the lack of clinical value offered by the PCEHR, emerged as significant barriers to use. The second dimension of the study, which mapped patient information flow, identified information quality issues, clinical workflow inefficiencies and interoperability misconceptions, resulting in duplication of effort, unnecessary manual processes, data quality and integrity issues, and an over-reliance on the understanding and communication skills of the patient. Conclusion: Opportunities for process efficiencies, improved data quality and increased patient safety emerge with the adoption of an appropriate information sharing platform. More importantly, large-scale eHealth initiatives must be aligned with the value proposition of individual stakeholders in order to achieve widespread adoption. Leveraging the Australian national eHealth infrastructure and the PCEHR, we offer a practical example of a service-driven digital ecosystem suitable for co-creating value in healthcare.
Abstract:
Quality of Service (QoS) is a new issue in cloud-based MapReduce, a popular computation model for parallel and distributed processing of big data. Guaranteeing QoS is challenging in a dynamic computation environment because a fixed resource allocation may become under-provisioned, leading to QoS violations, or over-provisioned, incurring unnecessary resource cost. This calls for runtime resource scaling that adapts to environmental changes in order to guarantee QoS. Aiming to guarantee QoS, which in this work is expressed as a hard deadline, this paper develops a theory to determine how and when resources should be scaled up or down for cloud-based MapReduce. The theory employs a nonlinear transformation to define the problem in a reverse resource space, simplifying the theoretical analysis significantly. Theoretical results are then presented in three theorems giving sufficient conditions for guaranteeing the QoS of cloud-based MapReduce. The superiority and applications of the theory are demonstrated through case studies.
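To make the scaling decision concrete, here is a minimal Python sketch of a deadline-driven scale-up/scale-down check. It is not the paper's reverse-resource-space formulation; the function names and the simplifying assumption of independent tasks with a uniform average running time are ours.

```python
import math

def min_slots_needed(remaining_tasks: int, avg_task_time: float,
                     time_to_deadline: float) -> int:
    """Lower bound on the parallel slots needed to finish `remaining_tasks`
    before a hard deadline, assuming independent tasks that each take
    `avg_task_time` seconds on one slot (an illustrative simplification)."""
    if time_to_deadline <= 0:
        raise ValueError("deadline already passed")
    # How many sequential "waves" of tasks still fit before the deadline.
    waves = math.floor(time_to_deadline / avg_task_time)
    return math.ceil(remaining_tasks / max(1, waves))

def scaling_decision(current_slots: int, remaining_tasks: int,
                     avg_task_time: float, time_to_deadline: float) -> int:
    """Slot delta for this control step: positive means scale up,
    negative means the allocation is over-provisioned and can shrink."""
    needed = min_slots_needed(remaining_tasks, avg_task_time, time_to_deadline)
    return needed - current_slots

# Example: 120 map tasks left, ~30 s each, 90 s to the deadline.
# Only 3 waves fit, so at least ceil(120 / 3) = 40 slots are required.
print(scaling_decision(25, 120, 30.0, 90.0))  # -> 15 (add 15 slots)
```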
Abstract:
Distributed collaborative computing services have superseded centralized computing platforms, allowing the development of distributed collaborative user applications. These applications enable people and computers to work together more productively. The Multi-Agent System (MAS) has emerged as a distributed collaborative environment which allows a number of agents to cooperate and interact with each other in a complex environment. We want to apply our agents to problems whose solutions require the collation and fusion of information, knowledge or data from distributed and autonomous information sources. In this paper we present the design and implementation of an agent-based conference planner application that uses the collaborative effort of agents which function continuously and autonomously in a particular environment. The application also enables the collaborative use of geographically widely deployed services built on different technologies, i.e. software agents, grid computing and web services. The premise of the application is that it allows autonomous agents interacting with web and grid services to plan a conference as a proxy for their owners (humans). © 2005 IEEE.
Abstract:
One of the most challenging problems in mobile broadband networks is how to assign the available radio resources among the different mobile users. Traditionally, research proposals are either specific to some type of traffic or deal with computationally intensive algorithms aimed at optimizing the delivery of general-purpose traffic. Consequently, commercial networks do not incorporate these mechanisms, due to the limited hardware resources at the mobile edge. Emerging 5G architectures introduce cloud computing principles to add flexible computational resources to Radio Access Networks. This paper makes use of Mobile Edge Computing concepts to introduce a new element, denoted the Mobile Edge Scheduler, aimed at minimizing the mean delay of general traffic flows in the LTE downlink. This element runs close to the eNodeB and implements a novel flow-aware and channel-aware scheduling policy in order to match transmissions to the available channel quality of the end users.
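As a rough illustration of what a combined flow-aware and channel-aware policy can look like, the Python sketch below ranks flows by a priority that mixes a shortest-remaining-size term (delay reduction) with a proportional-fair channel term. The exact formula and all names are our assumptions, not the scheduler proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    flow_id: int
    remaining_bytes: int      # flow-aware: how much of the flow is left
    channel_rate_bps: float   # channel-aware: currently achievable rate
    avg_rate_bps: float       # long-term average rate, for fairness

def schedule_next(flows: list[Flow]) -> Flow:
    """Pick the flow to serve in the next downlink scheduling interval."""
    def priority(f: Flow) -> float:
        srpt_term = 1.0 / max(f.remaining_bytes, 1)              # favour nearly-finished flows
        pf_term = f.channel_rate_bps / max(f.avg_rate_bps, 1.0)  # favour good channels
        return srpt_term * pf_term
    return max(flows, key=priority)

# Example: a short flow on a mediocre channel beats a long flow on a good one.
flows = [Flow(1, 5_000, 10e6, 8e6), Flow(2, 2_000_000, 40e6, 12e6)]
print(schedule_next(flows).flow_id)  # -> 1
```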
Abstract:
Implementations are presented of two common algorithms for integer factorization, Pollard’s “p – 1” method and the SQUFOF method. The algorithms are implemented in the F# language, a functional programming language developed by Microsoft and officially released for the first time in 2010. The algorithms are thoroughly tested on a set of large integers (up to 64 bits in size), running both on a physical machine and a Windows Azure machine instance. Analysis of the relative performance between the two environments indicates comparable performance when taking into account the difference in computing power. Further analysis reveals that the relative performance of the Azure implementation tends to improve as the magnitudes of the integers increase, indicating that such an approach may be suitable for larger, more complex factorization tasks. Finally, several questions are presented for future research, including the performance of F# and related languages for more efficient, parallelizable algorithms, and the relative cost and performance of factorization algorithms in various environments, including physical hardware and commercial cloud computing offerings from the various vendors in the industry.
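Since the abstract names the algorithms, a short Python sketch of Pollard's p-1 method (stage 1 only) may help; it mirrors the textbook algorithm, not the paper's F# implementation, and the test number is our own.

```python
import math

def pollard_p_minus_1(n: int, bound: int = 10_000):
    """Pollard's p-1 method, stage 1: find a prime factor p of n for
    which p - 1 is `bound`-smooth; return None on failure."""
    a = 2
    for k in range(2, bound + 1):
        a = pow(a, k, n)             # after this step, a = 2^(k!) mod n
        d = math.gcd(a - 1, n)
        if 1 < d < n:
            return d                 # non-trivial factor found
        if d == n:
            return None              # unlucky base; retry with a different one
    return None

# Example: n = 3361 * 2003. Since 3360 = 2^5 * 3 * 5 * 7 is very smooth,
# the factor 3361 falls out within the first few iterations.
print(pollard_p_minus_1(3361 * 2003))  # -> 3361
```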
Abstract:
The mobile cloud computing model promises to address the resource limitations of mobile devices, but effectively implementing this model is difficult. Previous work on mobile cloud computing has required the user to have a continuous, high-quality connection to the cloud infrastructure. This is undesirable and possibly infeasible: the energy required on the mobile device to maintain a connection and transfer sizeable amounts of data is large, and the bandwidth tends to be quite variable and low on cellular networks. The cloud deployment itself also needs to efficiently allocate scalable resources to the user. In this paper, we formulate best practices for efficiently managing the resources required for the mobile cloud model, namely energy, bandwidth and cloud computing resources. These practices can be realised with our mobile cloud middleware project, featuring the Cloud Personal Assistant (CPA). We compare this with other approaches in the area to highlight the importance of minimising the usage of these resources and thereby ensuring successful adoption of the model by end users. Based on results from experiments performed with mobile devices, we develop a no-overhead decision model for task and data offloading to the CPA of a user, which provides efficient management of mobile cloud resources.
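A classic energy break-even test gives a feel for the kind of trade-off such a decision model weighs. The sketch below is the textbook local-vs-offload energy comparison, not the paper's no-overhead model; all parameter names and numbers are illustrative.

```python
def should_offload(cycles: float, local_speed_hz: float, data_bytes: int,
                   bandwidth_bps: float, p_compute_w: float,
                   p_transfer_w: float) -> bool:
    """Offload when the energy to ship the task's data to the cloud is
    below the energy to execute the task on the device itself."""
    e_local = p_compute_w * (cycles / local_speed_hz)            # J to compute locally
    e_offload = p_transfer_w * (8 * data_bytes / bandwidth_bps)  # J to transmit the input
    return e_offload < e_local

# Example: a 5-gigacycle task on a 1 GHz core vs. sending 2 MB over 5 Mbit/s.
print(should_offload(cycles=5e9, local_speed_hz=1e9, data_bytes=2_000_000,
                     bandwidth_bps=5e6, p_compute_w=0.9, p_transfer_w=1.3))  # -> True
```

On a slower link the same test flips to local execution, which is exactly why the abstract stresses variable cellular bandwidth.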
Abstract:
The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for workflow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to deal with the formulation of efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling workflow applications. © 2010 Elsevier Inc. All rights reserved.
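For readers unfamiliar with how PSO is adapted to scheduling, the Python sketch below rounds continuous particle positions to node indices, one common discretisation. It is a bare-bones PSO, not the variable-neighborhood variant the paper proposes, and the cost matrix is invented.

```python
import random

def pso_schedule(costs, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO for task-to-node assignment. costs[t][m] is the cost of
    running task t on node m; each particle position is a vector of
    per-task node indices, relaxed to floats and rounded for evaluation."""
    n_tasks, n_nodes = len(costs), len(costs[0])

    def fitness(x):
        return sum(costs[t][min(n_nodes - 1, max(0, round(x[t])))]
                   for t in range(n_tasks))

    X = [[random.uniform(0, n_nodes - 1) for _ in range(n_tasks)]
         for _ in range(n_particles)]
    V = [[0.0] * n_tasks for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions
    g = min(P, key=fitness)[:]            # global best position

    for _ in range(iters):
        for i in range(n_particles):
            for t in range(n_tasks):
                r1, r2 = random.random(), random.random()
                V[i][t] = (w * V[i][t] + c1 * r1 * (P[i][t] - X[i][t])
                           + c2 * r2 * (g[t] - X[i][t]))
                X[i][t] = min(n_nodes - 1.0, max(0.0, X[i][t] + V[i][t]))
            if fitness(X[i]) < fitness(P[i]):
                P[i] = X[i][:]
                if fitness(P[i]) < fitness(g):
                    g = P[i][:]
    return [round(v) for v in g], fitness(g)

# Example: 4 tasks, 3 nodes; the optimum assigns each task to its cheapest node.
costs = [[4, 2, 9], [3, 7, 1], [5, 5, 2], [8, 1, 6]]
print(pso_schedule(costs))  # e.g. ([1, 2, 2, 1], 6)
```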
Abstract:
Recent advances in hardware development, coupled with the rapid adoption and broad applicability of cloud computing, have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity through a combination of in-depth analysis of application behavior and insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.
Abstract:
Demand Side Management (DSM) plays an important role in the Smart Grid. It involves large-scale access points, massive numbers of users, heterogeneous infrastructure and dispersed participants. Cloud computing, as a service model, is characterized by on-demand resources, high reliability and large-scale integration, while game theory is a useful tool for analysing dynamic economic phenomena. In this study, a cloud + end scheme is proposed to solve the technical and economic problems of DSM. The cloud + end architecture is designed to address the technical problems of DSM. In particular, a cloud + end construction model, based on game theory, is presented to address the economic problems of DSM. The proposed method is tested on a DSM cloud + end public service system built in a city in southern China. The results demonstrate the feasibility of these integrated solutions, which can provide a reference for the popularization and application of DSM in China.
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups is to the application that needs to be executed on the cloud: memory, processor, computation and storage. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications: the first is a financial risk application and the second a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
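One plausible reading of the weighted ranking step is a weighted sum over normalised per-group benchmark scores, sketched below in Python. The group names follow the abstract, but the normalisation, data and scoring formula are our assumptions rather than the paper's exact methodology.

```python
def rank_vms(benchmarks, weights):
    """Rank VM types by a weighted sum of benchmark scores, with each
    group normalised to [0, 1] so the user's weights are comparable."""
    groups = list(weights)
    best = {g: max(vm[g] for vm in benchmarks.values()) for g in groups}

    def score(name):
        return sum(weights[g] * benchmarks[name][g] / best[g] for g in groups)

    return sorted(((name, round(score(name), 3)) for name in benchmarks),
                  key=lambda pair: pair[1], reverse=True)

# Hypothetical benchmark data for three VM types; higher is better.
benchmarks = {
    "vm.small":  {"memory": 40, "processor": 35, "computation": 30, "storage": 55},
    "vm.medium": {"memory": 70, "processor": 65, "computation": 60, "storage": 60},
    "vm.large":  {"memory": 95, "processor": 90, "computation": 88, "storage": 70},
}
# A memory- and computation-heavy application (the four weights sum to 1).
print(rank_vms(benchmarks, {"memory": 0.4, "processor": 0.1,
                            "computation": 0.4, "storage": 0.1}))
```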