119 results for Cloud computing, OpenNebula, synchronization, replication, wide area network
Abstract:
The term “cloud computing” has emerged as a major ICT trend and has been acknowledged by respected industry survey organizations as a key technology and market development theme for the industry and ICT users in 2010. However, one of the major challenges facing the cloud computing concept and its global acceptance is how to secure and protect the data and processes that are the property of the user. The security of the cloud computing environment is a new research area requiring further development by both the academic and industrial research communities. Today, there are many diverse and uncoordinated efforts underway to address security issues in cloud computing, especially identity management issues. This paper introduces an architecture for a new approach to the necessary “mutual protection” in the cloud computing environment, based upon a concept of mutual trust and the specification of definable profiles in vector matrix form. The architecture aims to achieve better, more generic and flexible authentication, authorization and control, based on a concept of mutuality, within that cloud computing environment.
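As a rough illustration of profile-based mutual checking (the attribute set, 0-1 scale and comparison rule below are assumptions for illustration, not the architecture specified in the paper), a minimal Python sketch:

    # Illustrative only: attributes, scale and the threshold test are
    # assumptions, not the profile specification from the paper.
    # Each party states, per security attribute (e.g. encryption strength,
    # audit level, isolation), what it offers and what it requires.
    client   = {"offers": [0.7, 0.5, 0.6],  "requires": [0.8, 0.6, 0.9]}
    provider = {"offers": [0.9, 0.7, 0.95], "requires": [0.5, 0.4, 0.3]}

    def satisfies(required, offered):
        """One side's offer meets every level the other side requires."""
        return all(o >= r for r, o in zip(required, offered))

    # "Mutual protection": trust is established only if each side's offer
    # satisfies the other side's requirement profile.
    mutual = satisfies(client["requires"], provider["offers"]) and \
             satisfies(provider["requires"], client["offers"])
    print("mutual trust established:", mutual)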
Abstract:
With the widespread application of healthcare Information and Communication Technology (ICT), constructing a stable and sustainable data-sharing environment has attracted rapidly growing attention in both the academic research community and the healthcare industry. Cloud computing is one of the long-dreamed visions of the Healthcare Cloud (HC), which directly matches the need for healthcare information sharing among various health providers over the Internet, regardless of their location and the amount of data. In this paper, we discuss important research tools related to health information sharing and integration in the HC and investigate the arising challenges and issues. We describe several potential solutions that provide more opportunities to implement the EHR cloud. We also introduce the development of an HC-related collaborative healthcare research example, illustrating the prospects of applying cloud computing in health information science research.
Abstract:
Restoring a large-scale power system has always been a complicated and important issue. A great deal of research has been done on different aspects of the power system restoration procedure. However, in an actual situation, more time will be required to complete the restoration process if accurate, real-time system data cannot be obtained. With the development of the wide area monitoring system (WAMS), power system operators are capable of accessing more accurate data in the restoration stage after a major outage. The ultimate goal of system restoration is to restore as much load as possible in the shortest period of time after a blackout, and the restorable load can be estimated by employing WAMS. Moreover, discrete restorable loads are employed, considering the limited number of circuit-breaker operations and the practical topology of distribution systems. In this work, a restorable load estimation method is proposed that employs WAMS data after the network frame has been re-energized; WAMS is also employed to monitor system parameters in case the newly recovered system becomes unstable again. The proposed method has been validated with the New England 39-bus system and an actual power system in Guangzhou, China.
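A toy sketch of discrete restorable-load selection under a pickup limit and a breaker-operation budget; the greedy rule and every number below are illustrative assumptions, not the estimation method proposed in the paper:

    # Illustrative sketch: greedy selection of discrete load blocks to
    # restore. All values are assumed for illustration only.
    # (load_MW, breaker_ops) per discrete restorable load block
    candidate_loads = [(12.0, 1), (30.0, 2), (8.0, 1), (22.0, 2), (15.0, 1)]

    pickup_limit_mw = 60.0   # e.g. bounded by the allowable frequency dip,
                             # estimated from WAMS frequency/response data
    max_breaker_ops = 4      # limited number of circuit-breaker operations

    restored, ops = 0.0, 0
    # Greedy: prefer the largest load per breaker operation
    for load, k in sorted(candidate_loads, key=lambda x: -x[0] / x[1]):
        if restored + load <= pickup_limit_mw and ops + k <= max_breaker_ops:
            restored += load
            ops += k

    print(f"restored {restored} MW with {ops} breaker operations")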
Abstract:
The topic of “the cloud” has attracted significant attention throughout the past few years (Cherry 2009; Sterling and Stark 2009) and, as a result, academics and trade journals have created several competing definitions of “cloud computing” (e.g., Motahari-Nezhad et al. 2009). Underpinning this article is the definition put forward by the US National Institute of Standards and Technology, which describes cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction” (Garfinkel 2011, p. 3). Despite the lack of consensus about definitions, however, there is broad agreement on the growing demand for cloud computing. Some estimates suggest that spending on cloud-related technologies and services in the next few years may climb as high as USD 42 billion/year (Buyya et al. 2009).
Abstract:
Enterprises, both public and private, have rapidly begun exploiting the benefits of enterprise resource planning (ERP) combined with business analytics and “open data sets”, which are often outside the control of the enterprise, to gain further efficiencies, build new service operations and increase business activity. In many cases, these business activities are based around relevant software systems hosted in a “cloud computing” environment. “Garbage in, garbage out”, or “GIGO”, is a term long used to describe problems of unqualified dependency on information systems, dating from the 1960s. However, a more pertinent variation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems, such as ERP combined with open data sets in a cloud environment, verifying the authenticity of the data sets used may be almost impossible, resulting in dependence upon questionable results. Illicit data set “impersonation” becomes a reality. At the same time, the ability to audit such results may be an important requirement, particularly in the public sector. This paper discusses the need for enhanced identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment and analyses some current technologies that are offered and which may be appropriate. However, severe limitations in addressing these requirements have been identified, and the paper proposes further research work in the area.
Abstract:
Cloud computing has significantly impacted a broad range of industries, but these technologies and services have been absorbed throughout the marketplace unevenly. Some industries have moved aggressively towards cloud computing, while others have moved much more slowly. For the most part, the energy sector has approached cloud computing in a measured and cautious way, with progress often in the form of private cloud solutions rather than public ones, or hybridized information technology systems that combine cloud and existing non-cloud architectures. By moving towards cloud computing in a very slow and tentative way, however, the energy industry may prevent itself from reaping the full benefit that a more complete migration to the public cloud has brought about in several other industries. This short communication is accordingly intended to offer a high-level overview of cloud computing, and to put forward the argument that the energy sector should make a more complete migration to the public cloud in order to unlock the major system-wide efficiencies that cloud computing can provide. Also, assets within the energy sector should be designed with as much modularity and flexibility as possible so that they are not locked out of cloud-friendly options in the future.
Abstract:
A key challenge of wide area kinematic positioning is to overcome the effects of the varying hardware biases in the code signals of the BeiDou system. Based on three geometry-free/ionosphere-free combinations, the elevation-dependent code biases are modelled for all BeiDou satellites. Results from 30-day data sets for 5 baselines of 533 to 2545 km demonstrate that wide-lane (WL) integer-fixing success rates of 98% to 100% can be achieved within 25 min. Under the condition of an HDOP of less than 2, the overall RMS statistics show that ionosphere-free WL single-epoch solutions achieve 24 to 50 cm in the horizontal direction. Smoothing over a moving window of 20 min reduces the RMS values by a factor of about 2. Considering their distance-independent nature, the above results show the potential for reliable, high-precision positioning services to be provided over a wide area from a sparsely distributed ground network.
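For context, the textbook dual-frequency code combinations from which such geometry-free/ionosphere-free constructs are built; a minimal sketch assuming B1I/B2I signals (the paper's three combinations are triple-frequency constructs and may differ):

    # Textbook GNSS linear combinations; B1I/B2I frequencies assumed here
    # for illustration (the paper's specific combinations may differ).
    C  = 299_792_458.0           # speed of light, m/s
    F1 = 1_561.098e6             # BeiDou B1I carrier frequency, Hz
    F2 = 1_207.140e6             # BeiDou B2I carrier frequency, Hz

    def ionosphere_free(p1, p2):
        """IF combination: removes the first-order ionospheric delay."""
        return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

    def geometry_free(p1, p2):
        """GF combination: removes geometry, keeps ionosphere and biases."""
        return p1 - p2

    wide_lane_wavelength = C / (F1 - F2)   # ~0.847 m for B1I/B2I
    print(f"WL wavelength: {wide_lane_wavelength:.3f} m")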
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud, where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points in time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
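A bare-bones sketch of the cooperative-coevolution pattern itself: two species (allocations and schedules) evolve separately, and individuals are scored against the best collaborator from the other species. The task model and fitness below are placeholder assumptions, not the paper's CCGA:

    import random

    # Toy problem: assign 6 tasks to 3 resources (the "allocation" species)
    # and order the tasks (the "schedule" species). The fitness is a
    # placeholder completion-time measure, not the paper's QoS model.
    N_TASKS, N_RES = 6, 3

    def random_alloc():
        return [random.randrange(N_RES) for _ in range(N_TASKS)]

    def random_order():
        return random.sample(range(N_TASKS), N_TASKS)

    def fitness(alloc, order):
        """Total completion time when tasks run in 'order' on their resource."""
        busy, total = [0] * N_RES, 0
        for t in order:
            busy[alloc[t]] += 1 + t % 3      # assumed task durations
            total += busy[alloc[t]]          # task t finishes at this time
        return total                          # lower is better

    def evolve(pop, score):
        """Keep the better half, refill with swap-mutated copies."""
        elite = sorted(pop, key=score)[: len(pop) // 2]
        children = []
        for ind in elite:
            child = ind[:]
            i, j = random.sample(range(N_TASKS), 2)
            child[i], child[j] = child[j], child[i]
            children.append(child)
        return elite + children

    allocs = [random_alloc() for _ in range(20)]
    orders = [random_order() for _ in range(20)]
    for _ in range(50):
        # Each species is scored against the other's best collaborator.
        best_order = min(orders, key=lambda o: fitness(allocs[0], o))
        allocs = evolve(allocs, lambda a: fitness(a, best_order))
        best_alloc = min(allocs, key=lambda a: fitness(a, orders[0]))
        orders = evolve(orders, lambda o: fitness(best_alloc, o))

    print("best total completion time:",
          min(fitness(a, o) for a in allocs for o in orders))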
Abstract:
Software as a Service (SaaS) has recently been gaining more and more attention from software users and providers. This has raised many new challenges for SaaS providers in delivering better SaaSes that suit everyone's needs at minimum cost. One of the emerging approaches to tackling this challenge is to deliver the SaaS as a composite SaaS. Delivering it in this way has a number of benefits, including flexible offering of the SaaS functions and decreased subscription costs for users. However, this approach also introduces new problems for SaaS resource management in a Cloud data centre. We present the problem of composite SaaS resource management in a Cloud data centre, specifically its initial placement and resource optimization problems, aiming at improving SaaS performance based on its execution time as well as minimizing resource usage. Our approach differs from the existing literature because it addresses the problems resulting from composite SaaS characteristics, where we focus on SaaS requirements, constraints and interdependencies. The problems are tackled using evolutionary algorithms. Experimental results demonstrate the efficiency and scalability of the proposed algorithms.
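A minimal sketch of the kind of placement objective such algorithms evaluate: compute cost plus a penalty for dependency edges that cross servers, under capacity constraints. The model and all numbers are assumptions, not the paper's formulation:

    # Illustrative placement evaluation (assumed model, not the paper's):
    # components with CPU demands, dependency edges with data volumes,
    # servers with capacities; cost = execution estimate + transfer penalty.
    demands = {"ui": 2, "logic": 4, "db": 3}                 # CPU units
    edges = [("ui", "logic", 5.0), ("logic", "db", 8.0)]     # (a, b, GB moved)
    capacity = {"s1": 6, "s2": 8}

    def placement_cost(place, net_cost_per_gb=0.7):
        used = {s: 0 for s in capacity}
        for comp, srv in place.items():
            used[srv] += demands[comp]
        if any(used[s] > capacity[s] for s in capacity):
            return float("inf")                  # infeasible placement
        exec_cost = sum(demands.values())        # flat compute estimate
        transfer = sum(gb for a, b, gb in edges if place[a] != place[b])
        return exec_cost + net_cost_per_gb * transfer

    print(placement_cost({"ui": "s1", "logic": "s1", "db": "s2"}))  # 14.6
    print(placement_cost({"ui": "s1", "logic": "s2", "db": "s2"}))  # 12.5

Co-locating the heavily communicating logic and db components wins here, which is exactly the interdependency effect the abstract emphasises.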
Abstract:
Recently, Software as a Service (SaaS) in Cloud computing has become more and more significant among software users and providers. To offer a SaaS with flexible functions at low cost, SaaS providers have focused on the decomposition of SaaS functionalities, also known as composite SaaS. This approach has introduced new challenges in SaaS resource management in data centres. One of the challenges is managing the resources allocated to the composite SaaS. Due to the dynamic environment of a Cloud data centre, resources that have been initially allocated to SaaS components may become overloaded or wasted. As such, reconfiguration of the components' placement is triggered to maintain the performance of the composite SaaS. However, existing approaches often ignore the communication or dependencies between SaaS components in their implementation. In a composite SaaS, it is important to include these elements, as they directly affect the performance of the SaaS. This paper proposes a Grouping Genetic Algorithm (GGA) for multiple composite SaaS application component clustering in Cloud computing that addresses this gap. To the best of our knowledge, this is the first attempt to handle multiple composite SaaS reconfiguration placement in a dynamic Cloud environment. The experimental results demonstrate the feasibility and scalability of the GGA.
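A small sketch of the grouping encoding that distinguishes a GGA from a plain GA: the chromosome is a partition of components into groups (candidate clusters), and the operators act on whole groups. The two operators shown are generic GGA moves, assumed rather than taken from the paper:

    import random

    # Grouping encoding: a chromosome is a partition of components.
    components = ["c1", "c2", "c3", "c4", "c5", "c6"]
    chromosome = [{"c1", "c2"}, {"c3"}, {"c4", "c5", "c6"}]  # one individual

    def mutate_eliminate_group(chrom):
        """Generic GGA mutation: dissolve one group, reinsert its members."""
        chrom = [set(g) for g in chrom]
        victim = chrom.pop(random.randrange(len(chrom)))
        for comp in victim:
            random.choice(chrom).add(comp)   # reinsert into surviving groups
        return chrom

    def crossover_inject_groups(parent_a, parent_b):
        """Generic GGA crossover: inject one of B's groups into A, then
        strip the injected members from A's groups to restore a partition."""
        injected = random.sample(parent_b, k=1)
        taken = set().union(*injected)
        return [g - taken for g in parent_a if g - taken] + \
               [set(g) for g in injected]

    random.seed(1)
    print(mutate_eliminate_group(chromosome))
    print(crossover_inject_groups(chromosome,
                                  [{"c2", "c4"}, {"c1"}, {"c3", "c5", "c6"}]))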
Abstract:
A composite SaaS (Software as a Service) is software comprising several software components and data components. The composite SaaS placement problem is to determine where each of the components should be deployed in a cloud computing environment such that the performance of the composite SaaS is optimal. From the computational point of view, the composite SaaS placement problem is a large-scale combinatorial optimization problem. Thus, an Iterative Cooperative Co-evolutionary Genetic Algorithm (ICCGA) was proposed. The ICCGA can find solutions of reasonable quality, but its computation time is noticeably slow. Aiming at improving the computation time, we propose an unsynchronized Parallel Cooperative Co-evolutionary Genetic Algorithm (PCCGA) in this paper. Experimental results have shown that the PCCGA not only runs faster but also generates better-quality solutions than the ICCGA.
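A minimal sketch of the unsynchronized idea: each subpopulation evolves in its own worker at its own pace and simply reads the other population's latest best collaborator from shared state, with no generation barrier. The toy fitness and population model are assumptions, not the PCCGA itself:

    import random, threading

    # Shared 'best individual so far' per subpopulation; each worker reads
    # the other population's entry whenever it needs a collaborator --
    # possibly stale, never waiting at a barrier.
    best = {"A": [0] * 8, "B": [0] * 8}

    def toy_fitness(a, b):
        return sum(a) + sum(b)              # assumed placeholder objective

    def island(name, other, generations=200):
        pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
        for _ in range(generations):
            collaborator = best[other]      # latest available, maybe stale
            pop.sort(key=lambda ind: -toy_fitness(ind, collaborator))
            best[name] = pop[0]
            for i in range(5, 10):          # mutate bottom half from top half
                child = pop[i - 5][:]
                child[random.randrange(8)] ^= 1
                pop[i] = child

    threads = [threading.Thread(target=island, args=("A", "B")),
               threading.Thread(target=island, args=("B", "A"))]
    for t in threads: t.start()
    for t in threads: t.join()
    print("combined fitness:", toy_fitness(best["A"], best["B"]))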
Abstract:
Cloud computing has emerged as a major ICT trend and has been acknowledged as a key theme of industry by prominent ICT organisations. However, one of the major challenges facing the cloud computing concept and its global acceptance is how to secure and protect the data that is the property of the user. The geographic location of cloud data storage centres is an important issue for many organisations and individuals due to regulations and laws that require data and operations to reside in specific geographic locations. Thus, data owners may need to ensure that their cloud providers do not compromise the SLA contract by moving their data into another geographic location. This paper introduces an architecture for a new approach to geographic location assurance, which combines the proof of storage protocol (POS) and the distance-bounding protocol. This allows the client to check where their stored data is located, without relying on the word of the cloud provider. The architecture aims to achieve better security and more flexible geographic assurance within the cloud computing environment.
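The geometric core of distance bounding is a speed-of-light bound on the challenge-response round trip; a minimal sketch, with the RTT and processing-time figures assumed:

    # Distance-bounding check: a challenge-response round trip cannot be
    # faster than light, so a measured RTT upper-bounds the prover's distance.
    C = 299_792_458.0                # speed of light, m/s

    def max_distance_m(rtt_s, processing_s):
        """Upper bound on prover distance implied by a measured round trip."""
        return C * (rtt_s - processing_s) / 2.0

    # Assumed numbers: 4.2 ms measured RTT, 1.0 ms known processing delay.
    bound = max_distance_m(4.2e-3, 1.0e-3)
    print(f"storage is within {bound / 1000:.0f} km")   # ~480 km

    # Geographic assurance: flag if the bound exceeds the allowed radius.
    allowed_radius_km = 600
    assert bound / 1000 <= allowed_radius_km, "data may have left the region"

In the paper's combined scheme the proof-of-storage protocol additionally ties the timed response to the actual stored data, so the provider cannot answer quickly from a nearer site that does not hold the data.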
Abstract:
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using this technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware. Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217.
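The reported cost behaviour follows from a simple per-machine-hour billing model, sketched below; the total simulation time T and the hourly rate are assumed inputs:

    import math

    def completion_hours(total_cpu_hours, n_machines):
        """Wall-clock time scales as 1/n for an embarrassingly parallel run."""
        return total_cpu_hours / n_machines

    def relative_cost(total_cpu_hours, n_machines, rate_per_hour=1.0):
        """Per-machine-hour billing rounds wall time up to whole hours, so
        cost is minimal when n divides the total simulation hours."""
        wall = completion_hours(total_cpu_hours, n_machines)
        return n_machines * math.ceil(wall) * rate_per_hour

    T = 24  # assumed total simulation time in CPU-hours
    for n in (4, 5, 6, 7, 8):
        print(n, completion_hours(T, n), relative_cost(T, n))
    # n = 4, 6, 8 (factors of 24) all cost 24; n = 5 and 7 cost 25 and 28,
    # the difference being partially used billed hours.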
Abstract:
Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry's technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry's services to be offered through cloud-based “apps.”