62 results for CLOUDS


Relevance:

20.00%

Publisher:

Abstract:

All objects emerge from a cloud of activities, virtual pressures and situated encumbrances that precede their status as finished things. Once emerged, traces of their history linger in the object, signposting a range of past and future potentials that are largely inaccessible – or just unnoticed. Objects, in short, always shimmer with connections beyond themselves, through which they are part of ecologies that render them both meaningful and active. We would call this shimmering their ‘abstract life’. However, this life is rarely identified overtly, and tends to linger in the background, rendering their shimmering vitality more mute than manifest. This paper is interested in how that abstract life can become palpably evident through various forms of collapse, where a fallout throws a kind of dust into the lingering cloud – offering visibility, or material presence, to the otherwise largely invisible, abstract life of things. We will touch upon a series of examples, from the World Trade Centre collapse in the attacks of 2001, to the collapse of computational operations and perceptual models. These examples will lead toward experiments in image making – specifically through using panorama software applications on the iPhone – in which a collapse of the programmed panoramic logic creates ‘glitches’, throwing into question the status of the image and its relationship to perception, amongst other things. These experiments will be discussed in order to demonstrate how collapse might operate as a specific technique inside diverse creative practices (from image making to making architecture). By generating clouds of affective dust, related techniques can bring the abstract ‘life’ of objects flickering into the foreground, allowing the agency of the inanimate to shine.

Relevance:

20.00%

Publisher:

Abstract:

Using a child's expression that illustrates his mental image of the constituted relations of living things, the author conceptualizes relationality, an interrelated view of being, and its importance for early childhood education. The difference between relation and interaction, and the significance of inter-human relationships, are discussed as significant aspects for early childhood teachers to understand as they work with children. In particular, this paper seeks to provide insights into the potential contribution of relationality to early childhood teaching practice.

Relevance:

20.00%

Publisher:

Abstract:

Cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can beat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defense is resource competition between defenders and attackers. A cloud usually possesses abundant resources, with full control over them and the capability to allocate them dynamically. Therefore, the cloud offers us the potential to overcome DDoS attacks. However, individual cloud-hosted servers are still vulnerable to DDoS attacks if they run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim in order to quickly filter out attack packets while simultaneously guaranteeing the quality of service for benign users. We establish a mathematical model based on queueing theory to estimate the resource investment required. Through careful system analysis and real-world data set experiments, we conclude that we can defeat DDoS attacks in a cloud environment. © 2013 IEEE.
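The abstract above does not reproduce the paper's queueing model, but the core idea – sizing a pool of cloned intrusion prevention servers from arrival and filtering rates – can be sketched with an illustrative utilization bound (the rates, the 0.8 target, and the function name are assumptions, not the paper's parameters):

```python
import math

def servers_needed(arrival_rate, service_rate, max_utilization=0.8):
    """Estimate how many intrusion prevention server clones are needed
    so that per-server utilization stays below a target. Each clone is
    assumed to filter packets at `service_rate` pkts/s; this is an
    M/M/n-style stability condition, not the paper's exact model."""
    if arrival_rate <= 0:
        return 1
    # Require: arrival_rate < n * service_rate * max_utilization
    return max(1, math.ceil(arrival_rate / (service_rate * max_utilization)))

# Under normal load one clone suffices; during an attack the packet
# rate jumps and idle cloud resources are used to add clones.
baseline = servers_needed(5_000, 20_000)       # -> 1
under_attack = servers_needed(90_000, 20_000)  # -> 6
```

Scaling the clone pool back down once the attack subsides returns the idle resources to the cloud, which is what makes the strategy economical.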

Relevance:

20.00%

Publisher:

Abstract:

Data is becoming the world’s new natural resource, and big data use is growing quickly. The trend in computing technology is that everything is merged into the Internet and ‘big data’ are integrated to comprise complete information for collective intelligence. With the increasing size of big data, refining big data to reduce their size while keeping critical data (or useful information) is a new direction. In this paper, we provide a novel data consumption model, which separates the consumption of data from the raw data and thus enables cloud computing for big data applications. We define a new Data-as-a-Product (DaaP) concept: a data product is a small-sized summary of the original data that can directly answer users’ queries. Thus, we separate the mining of big data into two classes of processing modules: refine modules, which change raw big data into small-sized data products, and application-oriented mining modules, which discover the desired knowledge for applications from well-defined data products. Our practice of mining big stream data, including medical sensor stream data, streams of text data and trajectory data, has demonstrated the efficiency and precision of our DaaP model for answering users’ queries.
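The two-module split described above can be illustrated in a few lines; the summary chosen here (counts plus min/max) is a stand-in for whatever product the paper's refine modules actually build:

```python
from collections import Counter

def refine(raw_stream):
    """Refine module: compress raw big data into a small 'data product'.
    The product here is a count summary plus min/max and size -- an
    illustrative choice, not the paper's actual summaries."""
    counts = Counter(raw_stream)
    return {"counts": counts, "min": min(raw_stream),
            "max": max(raw_stream), "n": len(raw_stream)}

def query_mode(product):
    """Application-oriented mining module: answer a query directly from
    the data product, never touching the raw data again."""
    return product["counts"].most_common(1)[0][0]

raw = [3, 1, 3, 2, 3, 1]        # stands in for a big sensor stream
product = refine(raw)            # small product replaces the raw data
assert query_mode(product) == 3  # the most frequent reading
assert product["n"] == 6
```

The point of the separation is that only `refine` ever sees the raw stream; every downstream consumer works against the much smaller product.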

Relevance:

20.00%

Publisher:

Abstract:

The success of cloud computing has led an increasing number of real-time applications, such as signal processing and weather forecasting, to run in the cloud. Meanwhile, scheduling of real-time tasks plays an essential role in helping a cloud provider maintain its quality of service and enhance the system's performance. In this paper, we devise a novel agent-based scheduling mechanism in a cloud computing environment to allocate real-time tasks and dynamically provision resources. In contrast to traditional contract net protocols, we employ a bidirectional announcement-bidding mechanism whose collaborative process consists of three phases, i.e., a basic matching phase, a forward announcement-bidding phase and a backward announcement-bidding phase. Moreover, elasticity is fully considered during scheduling by dynamically adding virtual machines to improve schedulability. Furthermore, we design calculation rules for the bidding values in both the forward and backward announcement-bidding phases and two heuristics for selecting contractors. On the basis of the bidirectional announcement-bidding mechanism, we propose an agent-based dynamic scheduling algorithm named ANGEL for real-time, independent and aperiodic tasks in clouds. Extensive experiments are conducted on the CloudSim platform by injecting random synthetic workloads and workloads from the latest version of the Google cloud tracelogs to evaluate the performance of ANGEL. The experimental results indicate that ANGEL can efficiently solve the real-time task scheduling problem in virtualized clouds.

Relevance:

20.00%

Publisher:

Abstract:

As clouds have been deployed widely in various fields, the reliability and availability of clouds have become a major concern of cloud service providers and users. Consequently, fault tolerance in clouds has received a great deal of attention in both industry and academia, especially for real-time applications due to their safety-critical nature. A great deal of research has been conducted to realize fault tolerance in distributed systems, among which fault-tolerant scheduling plays a significant role. However, few studies on fault-tolerant scheduling sufficiently consider virtualization and elasticity, two key features of clouds. To address this issue, this paper presents a fault-tolerant mechanism which extends the primary-backup model to incorporate the features of clouds. Meanwhile, for the first time, we propose an elastic resource provisioning mechanism in the fault-tolerant context to improve resource utilization. On the basis of the fault-tolerant mechanism and the elastic resource provisioning mechanism, we design novel fault-tolerant elastic scheduling algorithms for real-time tasks in clouds, named FESTAL, aiming at achieving both fault tolerance and high resource utilization in clouds. Extensive experiments injecting random synthetic workloads as well as workloads from the latest version of the Google cloud tracelogs are conducted on CloudSim to compare FESTAL with three baseline algorithms, i.e., Non-Migration-FESTAL (NMFESTAL), Non-Overlapping-FESTAL (NOFESTAL), and Elastic First Fit (EFF). The experimental results demonstrate that FESTAL is able to effectively enhance the performance of virtualized clouds.
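The primary-backup model that FESTAL extends has one invariant worth making concrete: a task's primary and backup copies must never share a host. A minimal sketch of that placement rule (not FESTAL's actual algorithms, and the least-loaded heuristic is an assumption):

```python
def place_primary_backup(task, hosts):
    """Sketch of primary-backup placement: put the primary on the
    least-loaded host and the backup on a different host, so that a
    single host failure cannot destroy both copies."""
    candidates = sorted(hosts, key=lambda h: h["load"])
    if len(candidates) < 2:
        return None  # would trigger elastic provisioning of a new host
    primary, backup = candidates[0], candidates[1]
    primary["load"] += task["size"]
    # The backup reserves capacity but only consumes it if the primary's
    # host fails -- the slack that overlapping/backup-sharing techniques
    # exploit to raise utilization.
    return primary["name"], backup["name"]

hosts = [{"name": "h0", "load": 3.0}, {"name": "h1", "load": 1.0}]
assert place_primary_backup({"size": 2.0}, hosts) == ("h1", "h0")
```

The `None` branch is where elasticity enters: rather than rejecting the real-time task, the scheduler can provision another virtual machine and retry the placement.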

Relevance:

20.00%

Publisher:

Abstract:

In the cloud, data is usually stored in ciphertext for security. Attribute-based encryption (ABE) is a popular solution for allowing legitimate data users to access encrypted data, but it has high overhead and is vulnerable to data leakage. The authors propose an access control scheme based on anonymous authorization credentials and Lagrange interpolation polynomials, in which an access privilege and one secret share are used to reconstruct the user's decryption key. Because the credential is anonymously bound to its owner, only the legitimately authorized user can access and decrypt the encrypted data, without leaking any private information.
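The Lagrange-interpolation step can be illustrated with a standard Shamir-style reconstruction: the key is the constant term of a polynomial over a prime field, and two points (one derived from the access privilege, one being the user's secret share) recover it. The field, degree, and share derivation here are illustrative, not the paper's construction:

```python
P = 2_147_483_647  # a Mersenne prime; an illustrative field modulus

def eval_poly(coeffs, x):
    """Evaluate the share-generating polynomial at x over GF(P)."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lagrange_at_zero(points):
    """Reconstruct f(0) -- the decryption key -- from shares (x, f(x))
    by Lagrange interpolation over GF(P). Modular inverses are taken
    with Fermat's little theorem since P is prime."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
poly = [key, 987654]                       # degree 1: two shares suffice
privilege_share = (1, eval_poly(poly, 1))  # derived from the access privilege
secret_share = (2, eval_poly(poly, 2))     # the user's own secret share
assert lagrange_at_zero([privilege_share, secret_share]) == key
```

Either point alone reveals nothing about `key`, which is why holding only the credential or only the share does not let an attacker decrypt.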

Relevance:

20.00%

Publisher:

Abstract:

For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. The multicore processor technology provides new levels of performance and energy efficiency. The present paper aims to develop power and performance constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power constrained performance optimization and performance constrained power optimization. Our research problems in large-scale data centers are well-defined multivariable optimization problems, which explore the power-performance tradeoff by fixing one factor and minimizing the other, from the perspective of optimal load distribution. It is clear that such power and performance optimization is important for a cloud computing provider to efficiently utilize all the available resources. We model a multicore server processor as a queuing system with multiple servers. Our optimization problems are solved for two different models of core speed, where one model assumes that a core runs at zero speed when it is idle, and the other model assumes that a core runs at a constant speed. Our results in this paper provide new theoretical insights into power management and performance optimization in data centers.
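The difference between the two core-speed models above can be made concrete with a common cube-law power assumption (dynamic power proportional to speed cubed); the parameters `xi` and `alpha` and the function shape are illustrative, not the paper's exact formulation:

```python
def server_power(n_cores, speed, utilization, xi=1.0, alpha=3.0,
                 idle_speed_model=True):
    """Dynamic power of a multicore server under the two core-speed
    models: in the idle-speed model a core consumes no dynamic power
    while idle, so power scales with utilization; in the constant-speed
    model every core always runs at `speed`. Per-core power is the
    cube-law estimate xi * speed**alpha (an assumption)."""
    per_core = xi * speed ** alpha
    if idle_speed_model:
        return n_cores * utilization * per_core
    return n_cores * per_core

# At 50% utilization the idle-speed model halves the dynamic power,
# which is why the two models yield different optimal load distributions:
assert server_power(16, 2.0, 0.5, idle_speed_model=True) == 64.0
assert server_power(16, 2.0, 0.5, idle_speed_model=False) == 128.0
```

Under a total power budget, the optimization then trades speed against utilization per server: the convexity of `speed**alpha` is what makes spreading load across servers, rather than concentrating it, power-efficient.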

Relevance:

20.00%

Publisher:

Abstract:

Distributing multiple replicas in geographically dispersed clouds is a popular approach to reducing latency for users. It is important to ensure that each replica is both available and intact; that is, the same as the original data, without any corruption or tampering. Remote data possession checking is a valid method to verify a replica's availability and integrity. Since remotely checking the entire data is time-consuming due to both the large data volume and the limited bandwidth, efficient data-possession-verifying methods generally sample and check a small hash (or random blocks) of the data to greatly reduce the I/O cost. Most recent research on data possession checking considers only a single replica. However, multiple-replica data possession checking is much more challenging, since it is difficult to optimize the remote communication cost among multiple geographically dispersed clouds. In this paper, we provide a novel, efficient Distributed Multiple Replicas Data Possession Checking (DMRDPC) scheme to tackle these new challenges. Our goal is to improve efficiency by finding an optimal spanning tree to define the partial order of scheduling multiple-replica data possession checking. But since bandwidths have geographical diversity on the different replica links and the bandwidths between two replicas are asymmetric, we must resolve the problem of Finding an Optimal Spanning Tree in a Complete Bidirectional Directed Graph, which we call the FOSTCBDG problem. In particular, we provide theories for resolving the FOSTCBDG problem by counting all the available paths that viruses attack in a cloud network environment. We also help cloud users achieve efficient multiple-replica data possession checking with an approximate algorithm for tackling the FOSTCBDG problem, whose effectiveness is demonstrated by an experimental study. © 2011 Elsevier Inc.
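The asymmetry that makes FOSTCBDG hard can be seen in a small sketch. This Prim-style greedy heuristic (an illustration, not the paper's approximate algorithm) grows a checking tree from the verifier's root replica, always attaching the next replica over the highest-bandwidth directed link:

```python
def greedy_checking_tree(bandwidth, root):
    """Greedily build a spanning tree over a complete bidirectional
    directed graph of replicas. `bandwidth[u][v]` is the bandwidth of
    the directed link u -> v; because links are asymmetric,
    bandwidth[u][v] may differ from bandwidth[v][u], which is why an
    undirected MST algorithm does not apply directly."""
    nodes = set(bandwidth)
    tree, visited = [], {root}
    while visited != nodes:
        # Pick the highest-bandwidth directed edge leaving the tree.
        _, u, v = max((bandwidth[u][v], u, v)
                      for u in visited for v in nodes - visited)
        tree.append((u, v))
        visited.add(v)
    return tree

bw = {"A": {"B": 10, "C": 3},
      "B": {"A": 2, "C": 8},
      "C": {"A": 1, "B": 5}}
# Note bw["A"]["B"] != bw["B"]["A"]: the links are asymmetric.
assert greedy_checking_tree(bw, "A") == [("A", "B"), ("B", "C")]
```

The resulting edge list is the partial order in which replicas check each other, so the verifier at the root never has to contact every replica over its own (possibly slow) links.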

Relevance:

20.00%

Publisher:

Abstract:

By 2010, cloud computing had become established as a new model of IT provisioning for service providers. New market players and businesses emerged, threatening the business models of established market players. This teaching case explores the challenges arising through the impact of the new cloud computing technology on an established, multinational IT service provider called ITSP. Should the incumbent vendors adopt cloud computing offerings? And, if so, what form should those offerings take? The teaching case focuses on the strategic dimensions of technological developments, their threats and opportunities. It requires strategic decision making and forecasting under high uncertainty. The critical question is whether cloud computing is a disruptive technology or simply an alternative channel to supply computing resources over the Internet. The case challenges students to assess this new technology and plan ITSP’s responses.

Relevance:

20.00%

Publisher:

Abstract:

Since the development of the computer, user-oriented innovations such as graphical operating systems, mice, and mobile devices have made computing ubiquitous in modern society. The cloud is the next step in this process. Through the cloud, computing has undergone commodification and has been made available as a utility. However, in comparison to other commodities such as water and electricity, clouds (in particular IaaS and PaaS) have not reached the same penetration into the global market. We propose that through further abstraction, future clouds will be ubiquitous and transparent, made accessible to ordinary users and integrated into all aspects of society. This paper presents a concept of and path to this ubiquitous and transparent cloud, accessible by the masses.

Relevance:

20.00%

Publisher:

Abstract:

With the development of Cloud services and virtual appliances, more and more clients are willing to use these services to host their applications across different Cloud platforms. However, it is becoming harder and harder for clients to select the most trustworthy Cloud service provider to host their applications. In this paper, we propose a Cloud provider selection model for choosing the most reliable platform on which to deploy network appliances. The selection model uses trust credibility to select reliable and cost-effective Cloud providers. A preliminary evaluation is presented to show the effectiveness of the proposed trust model and selection approach.
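The abstract does not define how trust credibility and cost are combined; one plausible shape is a weighted score, sketched here with entirely assumed weights, fields, and normalization (not the paper's trust model):

```python
def select_provider(providers, w_trust=0.7, w_cost=0.3):
    """Pick the provider maximizing a weighted blend of trust
    credibility and cost-effectiveness. Trust is assumed to lie in
    [0, 1]; cost is normalized against the cheapest offer so that
    lower prices score higher."""
    min_cost = min(p["cost"] for p in providers)

    def score(p):
        return w_trust * p["trust"] + w_cost * (min_cost / p["cost"])

    return max(providers, key=score)["name"]

providers = [
    {"name": "cloudA", "trust": 0.95, "cost": 12.0},
    {"name": "cloudB", "trust": 0.60, "cost": 8.0},
]
# With trust weighted heavily, the pricier but more credible provider wins:
assert select_provider(providers) == "cloudA"
```

Shifting the weights toward cost (e.g. `w_trust=0.2, w_cost=0.8`) flips the choice, which is the trade-off any such selection model has to expose.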

Relevance:

20.00%

Publisher:

Abstract:

QoS plays a key role in evaluating a service or a service composition plan across clouds and data centers. Currently, the energy cost of a service's execution is not covered by the QoS framework, and a service's price is often fixed during its execution. However, energy consumption contributes greatly to determining the price of a cloud service. As a result, it is not reasonable for the price of a cloud service to be calculated with a fixed energy consumption value when part of the service's energy consumption could be saved during its execution. Taking advantage of dynamic energy-aware optimization techniques, a QoS-enhanced method for service computing is proposed in this paper through virtual machine (VM) scheduling. Technically, two typical QoS metrics, i.e., the price and the execution time, are taken into consideration in our method. Moreover, our method consists of two dynamic optimization phases. The first phase aims at dynamically benefiting a user with a discounted price by transparently migrating his or her task execution from a VM located at a server with high energy consumption to one with low energy consumption. The second phase aims at shortening a task's execution time by transparently migrating the task execution from one VM to another located at a server with higher performance. Experimental evaluation on large-scale service computing across clouds demonstrates the validity of our method.
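The two migration phases can be sketched as a single decision function; the field names and the pick-the-extreme decision rule are assumptions for illustration, not the paper's scheduler:

```python
def pick_migration(current, servers, remaining_work):
    """Sketch of the two optimization phases: phase 1 migrates the VM
    to a server with lower energy consumption, earning the user a
    price discount; phase 2 migrates to a higher-performance server
    to shorten the remaining execution time."""
    cheaper = min(servers, key=lambda s: s["energy_per_unit"])
    faster = max(servers, key=lambda s: s["speed"])
    plan = {}
    if cheaper["energy_per_unit"] < current["energy_per_unit"]:
        plan["discount_target"] = cheaper["name"]   # phase 1
    if faster["speed"] > current["speed"]:
        plan["speedup_target"] = faster["name"]     # phase 2
        plan["new_exec_time"] = remaining_work / faster["speed"]
    return plan

current = {"name": "s0", "energy_per_unit": 5.0, "speed": 1.0}
servers = [{"name": "s1", "energy_per_unit": 2.0, "speed": 1.0},
           {"name": "s2", "energy_per_unit": 6.0, "speed": 4.0}]
plan = pick_migration(current, servers, remaining_work=8.0)
assert plan == {"discount_target": "s1",
                "speedup_target": "s2", "new_exec_time": 2.0}
```

Note the two phases can pull toward different servers, as here: the cheapest-energy host and the fastest host need not coincide, so a real scheduler must weigh the price discount against the time saving per task.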