52 results for minimalist hardware architecture


Relevance: 80.00%

Abstract:

The success of cloud computing makes an increasing number of real-time applications, such as signal processing and weather forecasting, run in the cloud. Meanwhile, scheduling of real-time tasks plays an essential role for a cloud provider in maintaining its quality of service and enhancing the system's performance. In this paper, we devise a novel agent-based scheduling mechanism in a cloud computing environment to allocate real-time tasks and dynamically provision resources. In contrast to traditional contract net protocols, we employ a bidirectional announcement-bidding mechanism whose collaborative process consists of three phases, i.e., a basic matching phase, a forward announcement-bidding phase, and a backward announcement-bidding phase. Moreover, elasticity is explicitly considered during scheduling by dynamically adding virtual machines to improve schedulability. Furthermore, we design calculation rules for the bidding values in both the forward and backward announcement-bidding phases and two heuristics for selecting contractors. On the basis of the bidirectional announcement-bidding mechanism, we propose an agent-based dynamic scheduling algorithm named ANGEL for real-time, independent and aperiodic tasks in clouds. Extensive experiments are conducted on the CloudSim platform by injecting random synthetic workloads and workloads from the latest version of the Google cloud tracelogs to evaluate the performance of ANGEL. The experimental results indicate that ANGEL can efficiently solve the real-time task scheduling problem in virtualized clouds.
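
The sketch below is only a minimal illustration of one forward announcement-bidding round, not the ANGEL algorithm itself: the `Task`/`VM` data model and the bid rule (earliest estimated finish time wins) are assumptions made for the example.

```python
# Minimal sketch of one forward announcement-bidding round: tasks announce
# their demands, candidate VMs bid with an estimated finish time, and each
# task is awarded to the lowest bidder that can still meet its deadline.
# The bid rule and data model are illustrative assumptions, not ANGEL itself.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    length: float      # required compute time on a unit-speed VM
    deadline: float    # absolute deadline

@dataclass
class VM:
    name: str
    speed: float
    ready_time: float = 0.0   # when the VM becomes free

def bid(vm: VM, task: Task) -> float:
    """A VM's bid: the estimated finish time of the task on this VM."""
    return vm.ready_time + task.length / vm.speed

def forward_round(tasks, vms):
    schedule = {}
    for task in sorted(tasks, key=lambda t: t.deadline):   # most urgent first
        finish, winner = min((bid(vm, task), vm) for vm in vms)
        if finish <= task.deadline:                         # award the contract
            schedule[task.name] = winner.name
            winner.ready_time = finish
        # else: in ANGEL the task would enter the backward phase or trigger
        # elastic VM provisioning; here we simply leave it unscheduled.
    return schedule

if __name__ == "__main__":
    tasks = [Task("t1", 4, 10), Task("t2", 3, 5), Task("t3", 6, 8)]
    vms = [VM("vm1", 1.0), VM("vm2", 2.0)]
    print(forward_round(tasks, vms))
```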

Relevance: 80.00%

Abstract:

Data sharing has never been easier with the advances of cloud computing, and an accurate analysis of the shared data provides an array of benefits to both society and individuals. Data sharing with a large number of participants must take into account several issues, including efficiency, data integrity and the privacy of the data owner. Ring signature is a promising candidate for constructing an anonymous and authentic data sharing system. It allows a data owner to anonymously authenticate his data, which can be put into the cloud for storage or analysis purposes. Yet the costly certificate verification in the traditional public key infrastructure (PKI) setting becomes a bottleneck for this solution to be scalable. Identity-based (ID-based) ring signature, which eliminates the process of certificate verification, can be used instead. In this paper, we further enhance the security of ID-based ring signature by providing forward security: if a secret key of any user has been compromised, all previously generated signatures that include this user still remain valid. This property is especially important to any large-scale data sharing system, as it is impractical to ask all data owners to re-authenticate their data even if a secret key of a single user has been compromised. We provide a concrete and efficient instantiation of our scheme, prove its security and provide an implementation to show its practicality.
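
The paper's scheme is an ID-based ring signature, which the sketch below does not reproduce; it only illustrates the generic forward-security idea of one-way key evolution, with HMAC standing in for the real signing algorithm and all names chosen for the example.

```python
# Generic illustration of forward security via one-way key evolution, not the
# ID-based ring signature of the paper: the secret key for period t+1 is a
# one-way hash of the key for period t, so compromising the current key does
# not reveal earlier keys, and signatures made in earlier periods stay valid.
# HMAC is used here only as a stand-in for the actual signing algorithm.
import hashlib, hmac

def evolve(sk: bytes) -> bytes:
    """One-way key update: sk_{t+1} = H("evolve" || sk_t)."""
    return hashlib.sha256(b"evolve" + sk).digest()

def sign(sk_t: bytes, message: bytes) -> bytes:
    return hmac.new(sk_t, message, hashlib.sha256).digest()

if __name__ == "__main__":
    sk = hashlib.sha256(b"initial secret").digest()   # sk_0 (illustrative)
    keys = [sk]
    for _ in range(5):                                # derive sk_1 .. sk_5
        sk = evolve(sk)
        keys.append(sk)

    sig_period_2 = sign(keys[2], b"data authenticated in period 2")

    # An attacker who steals keys[5] can only move *forward* by evolving again;
    # inverting SHA-256 to recover keys[2] is infeasible, so the period-2
    # signature above is unaffected by the later compromise.
    assert hmac.compare_digest(
        sig_period_2, sign(keys[2], b"data authenticated in period 2"))
```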

Relevance: 80.00%

Abstract:

As clouds have been deployed widely in various fields, the reliability and availability of clouds have become a major concern of cloud service providers and users. Thereby, fault tolerance in clouds receives a great deal of attention in both industry and academia, especially for real-time applications due to their safety-critical nature. A large body of research has been conducted to realize fault tolerance in distributed systems, among which fault-tolerant scheduling plays a significant role. However, few studies on fault-tolerant scheduling sufficiently address virtualization and elasticity, two key features of clouds. To address this issue, this paper presents a fault-tolerant mechanism which extends the primary-backup model to incorporate the features of clouds. Meanwhile, for the first time, we propose an elastic resource provisioning mechanism in the fault-tolerant context to improve resource utilization. On the basis of the fault-tolerant mechanism and the elastic resource provisioning mechanism, we design a novel fault-tolerant elastic scheduling algorithm for real-time tasks in clouds, named FESTAL, aiming at achieving both fault tolerance and high resource utilization in clouds. Extensive experiments injecting random synthetic workloads as well as workloads from the latest version of the Google cloud tracelogs are conducted on CloudSim to compare FESTAL with three baseline algorithms, i.e., Non-Migration-FESTAL (NMFESTAL), Non-Overlapping-FESTAL (NOFESTAL), and Elastic First Fit (EFF). The experimental results demonstrate that FESTAL is able to effectively enhance the performance of virtualized clouds.
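
As a minimal sketch of the primary-backup idea that FESTAL builds on (not FESTAL itself, and without its elastic provisioning or migration), the example below places each task's primary copy on the host with the earliest estimated finish time and its backup on a different host; the data model is an assumption for illustration.

```python
# Minimal sketch of primary-backup placement: each task gets a primary copy
# on the host with the earliest estimated finish time and a backup copy on a
# different host, so a single host failure never loses both copies. This is
# only the basic idea behind FESTAL-style scheduling, not FESTAL itself.
def place_primary_backup(tasks, hosts):
    # tasks: {name: execution_time}; hosts: {name: ready_time}
    placement = {}
    for name, exec_time in tasks.items():
        ranked = sorted(hosts, key=lambda h: hosts[h] + exec_time)
        primary, backup = ranked[0], ranked[1]          # must be distinct hosts
        hosts[primary] += exec_time                     # primary occupies its host
        # A passive backup only consumes time if the primary's host fails,
        # so we do not advance the backup host's ready time here.
        placement[name] = {"primary": primary, "backup": backup}
    return placement

if __name__ == "__main__":
    tasks = {"t1": 4.0, "t2": 2.0, "t3": 3.0}
    hosts = {"h1": 0.0, "h2": 0.0, "h3": 1.0}
    print(place_primary_backup(tasks, hosts))
```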

Relevance: 80.00%

Abstract:

Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. However, only one copy of each file is stored in the cloud even if such a file is owned by a huge number of users. As a result, a deduplication system improves storage utilization while reducing reliability. Furthermore, the challenge of privacy for sensitive data also arises when it is outsourced by users to the cloud. Aiming to address the above security challenges, this paper makes the first attempt to formalize the notion of a distributed reliable deduplication system. We propose new distributed deduplication systems with higher reliability in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are also achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems. Security analysis demonstrates that our deduplication systems are secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement the proposed systems and demonstrate that the incurred overhead is very limited in realistic environments.
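
The toy sketch below illustrates why a *deterministic* secret sharing scheme fits deduplication: the sharing polynomial's coefficients are derived from a hash of the chunk, so identical chunks always produce identical shares and duplicate shares can still be deduplicated across servers. The field size, coefficient derivation and chunk size are assumptions for the example, not the paper's construction.

```python
# Toy (k, n) secret sharing in the spirit of a *deterministic* scheme: the
# polynomial coefficients are derived from a hash of the chunk itself, so the
# same chunk always yields the same shares and duplicate chunks can still be
# deduplicated server-side. Illustrative sketch only, not the paper's scheme.
import hashlib

P = 2**127 - 1          # a Mersenne prime field, large enough for 15-byte blocks

def _coeffs(chunk: bytes, k: int):
    # coefficient c_i is derived deterministically from the chunk, unlike the
    # random coefficients of classic Shamir sharing
    return [int.from_bytes(hashlib.sha256(b"coef%d" % i + chunk).digest()[:15], "big")
            for i in range(1, k)]

def make_shares(chunk: bytes, k: int, n: int):
    """Shares f(1..n) of f(x) = secret + c1*x + ... + c_{k-1}*x^{k-1} mod P."""
    assert len(chunk) <= 15, "toy example: one field element per chunk"
    coeffs = [int.from_bytes(chunk, "big")] + _coeffs(chunk, k)
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares, length=15):
    """Lagrange interpolation at x = 0 recovers the chunk from any k shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * -xm % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret.to_bytes(length, "big").lstrip(b"\x00")

if __name__ == "__main__":
    shares = make_shares(b"a data chunk", k=3, n=5)   # any 3 of 5 shares suffice
    assert reconstruct(shares[:3]) == b"a data chunk"
    assert reconstruct(shares[2:]) == b"a data chunk"
```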

Relevance: 80.00%

Abstract:

The notion of database outsourcing enables the data owner to delegate database management to a cloud service provider (CSP) that provides various database services to different users. Recently, plenty of research has been done on the outsourced database primitive. However, no existing solution seems to fully support both correctness and completeness of the query results, especially in the case where a dishonest CSP intentionally returns an empty set for the query request of the user. In this paper, we propose a new verifiable auditing scheme for outsourced databases, which can simultaneously achieve correctness and completeness of search results even if the dishonest CSP purposely returns an empty set. Furthermore, we prove that our construction achieves the desired security properties even for an encrypted outsourced database. Besides, the proposed scheme can be extended to support the dynamic database setting by incorporating the notion of verifiable database with updates.
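
The sketch below is not the paper's construction; it shows one common building block for proving completeness of (possibly empty) range results: the owner signs every adjacent pair of sorted keys, so an empty answer can be proved by returning the signed pair that brackets the queried range. HMAC stands in for a real signature, and the key/field names are assumptions.

```python
# A common building block for verifiable completeness (not the paper's exact
# construction): the owner sorts the keys, adds -inf/+inf sentinels, and signs
# every *adjacent pair*. An empty result for a range query is then provably
# empty, because the single signed pair (k_i, k_{i+1}) bracketing the range
# shows no key was omitted. HMAC only stands in for the owner's real signature.
import hmac, hashlib

OWNER_KEY = b"owner-signing-key"        # illustrative secret
NEG_INF, POS_INF = float("-inf"), float("inf")

def sign_pair(lo, hi):
    return hmac.new(OWNER_KEY, repr((lo, hi)).encode(), hashlib.sha256).digest()

def build_chain(keys):
    """Owner-side: a signed chain of adjacent sorted keys."""
    pts = [NEG_INF] + sorted(keys) + [POS_INF]
    return [(pts[i], pts[i + 1], sign_pair(pts[i], pts[i + 1]))
            for i in range(len(pts) - 1)]

def prove_empty(chain, lo, hi):
    """Server-side: return the signed pair bracketing an empty range [lo, hi]."""
    for a, b, sig in chain:
        if a < lo and hi < b:
            return (a, b, sig)
    return None

def verify_empty(proof, lo, hi):
    a, b, sig = proof
    return a < lo and hi < b and hmac.compare_digest(sig, sign_pair(a, b))

if __name__ == "__main__":
    chain = build_chain([5, 17, 42])
    proof = prove_empty(chain, 20, 30)          # no key falls in [20, 30]
    assert proof is not None and verify_empty(proof, 20, 30)
```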

Relevance: 80.00%

Abstract:

Big data is an emerging hot research topic due to its pervasive applications in human society, such as government, climate, finance, and science. Currently, most research work on big data falls into data mining, machine learning, and data analysis. However, these top-level killer applications would not be possible without the underlying support of networking, due to their extremely large volume and computational complexity, especially when real-time or near-real-time applications are demanded.

Relevance: 80.00%

Abstract:

Big data analytics has shown great potential in optimizing operations, making decisions, spotting business trends, preventing threats, and capitalizing on new sources of revenue in various fields such as manufacturing, healthcare, finance, insurance, and retail. The management of various networks has become inefficient and difficult because of their high complexity and interdependencies. Big data, in the form of device logs, software logs, media content, and sensed data, provides rich information and facilitates a fundamentally different and novel approach to explore, design, and develop reliable and scalable networks. This Special Issue covers the most recent research results that address the challenges of big data for networking. We received 45 submissions, and ultimately nine high-quality papers, organized into two groups, have been selected for inclusion in this Special Issue.

Relevance: 80.00%

Abstract:

With advances in semiconductor processes and EDA tool technology, IC designers can integrate more functions. However, to meet time-to-market demands and tackle the increasing complexity of SoCs, the need for fast prototyping and testing is growing. Taking advantage of deep-submicron technology, modern FPGAs provide fast and low-cost prototyping with large logic resources and high performance, so the hardware is mapped onto an FPGA-based emulation platform that mimics the behaviour of the SoC. In this paper we use an FPGA as a system on chip for image compression based on the 2-D DCT, and propose an SoC for image compression using the soft-core MicroBlaze processor. The JPEG standard defines compression techniques for image data; as a consequence, it allows image data to be stored and transferred with considerably reduced demands on storage space and bandwidth. Of the four processes provided in the JPEG standard, only one, the baseline process, is widely used. The proposed SoC for JPEG compression has been implemented on the Spartan-6 SP605 evaluation board using Xilinx Platform Studio, because FPGAs have a reconfigurable hardware architecture. Hence, JPEG images can be obtained at high speed and reduced size, with low risk and a power consumption of about 0.699 W. The proposed SoC for image compression is evaluated at 83.33 MHz on the Xilinx Spartan-6 FPGA.
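
For reference, the 8x8 2-D DCT-II at the heart of baseline JPEG can be modelled in a few lines; the sketch below is a floating-point software model only (the FPGA datapath would use a fixed-point variant), given here to make the transform the SoC implements concrete.

```python
# Floating-point reference model of the 8x8 2-D DCT-II used by baseline JPEG
# (the FPGA datapath would use a fixed-point variant of the same transform).
# The 2-D DCT is separable: Y = C X C^T with the orthonormal DCT matrix C.
import numpy as np

N = 8

def dct_matrix(n: int = N) -> np.ndarray:
    k = np.arange(n).reshape(-1, 1)        # row index (frequency)
    m = np.arange(n).reshape(1, -1)        # column index (sample)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)             # DC row scaling for orthonormality
    return C

def dct2(block: np.ndarray) -> np.ndarray:
    C = dct_matrix()
    return C @ block @ C.T

def idct2(coeffs: np.ndarray) -> np.ndarray:
    C = dct_matrix()
    return C.T @ coeffs @ C

if __name__ == "__main__":
    block = np.random.randint(0, 256, (N, N)).astype(float) - 128  # level shift
    assert np.allclose(idct2(dct2(block)), block)                  # C is orthonormal
```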

Relevance: 80.00%

Abstract:

With the rising demands on cloud services, electricity consumption has been increasing drastically and has become the main operational expenditure (OPEX) for data center providers. The geographical heterogeneity of electricity prices motivates us to study the task placement problem over geo-distributed data centers. We exploit the dynamic frequency scaling technique and formulate an optimization problem that minimizes OPEX while guaranteeing the quality of service, i.e., the expected response time of tasks. Furthermore, we derive an optimal solution for the formulated problem. The experimental results show that our proposal achieves much higher cost-efficiency than the traditional resizing scheme, i.e., activating/deactivating certain servers in data centers.
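
The abstract does not spell out the cost and queueing model, so the sketch below is a toy stand-in under explicit assumptions: each data center behaves as an M/M/1 queue whose service rate scales with CPU frequency, power grows cubically with frequency, and the expected response time must stay below a bound. All prices, rates and bounds are invented for illustration, and the search is brute force rather than the paper's optimal solution.

```python
# Toy sketch of the cost/QoS trade-off described above, under assumptions the
# abstract does not spell out: each data center is an M/M/1 queue whose service
# rate scales linearly with CPU frequency, power grows with the cube of the
# frequency, and the expected response time 1/(mu - lambda) must stay below a
# bound D. We brute-force the workload split and the frequency choice.
import itertools

PRICES = {"dc_east": 0.12, "dc_west": 0.07}      # $/kWh, illustrative
FREQS = [1.0, 1.5, 2.0, 2.5]                     # GHz options per data center
BASE_RATE = 40.0                                 # tasks/s per GHz (assumed)
TOTAL_LOAD = 120.0                               # tasks/s to place
D = 0.05                                         # response-time bound (s)

def cost(freq, price):
    return price * freq ** 3                     # cubic dynamic-power model

def best_placement(step=10.0):
    best = None
    splits = [x * step for x in range(int(TOTAL_LOAD / step) + 1)]
    for east_load in splits:
        west_load = TOTAL_LOAD - east_load
        for f_e, f_w in itertools.product(FREQS, repeat=2):
            mu_e, mu_w = BASE_RATE * f_e, BASE_RATE * f_w
            if mu_e <= east_load or mu_w <= west_load:
                continue                          # queue unstable
            if 1 / (mu_e - east_load) > D or 1 / (mu_w - west_load) > D:
                continue                          # QoS violated
            c = cost(f_e, PRICES["dc_east"]) + cost(f_w, PRICES["dc_west"])
            if best is None or c < best[0]:
                best = (c, east_load, west_load, f_e, f_w)
    return best

if __name__ == "__main__":
    print(best_placement())
```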

Relevance: 80.00%

Abstract:

Given a set of events and a set of robots, the dispatch problem is to allocate one robot to each event to visit it. In a single round, each robot may be allowed to visit only one event (matching dispatch) or several events in a sequence (sequence dispatch). In a distributed setting, each event is discovered by a sensor and reported to a robot. Here, we present novel algorithms aimed at overcoming the shortcomings of several existing solutions. We propose a pairwise distance based matching algorithm (PDM) to eliminate long edges by pairwise exchanges between matching pairs. Our sequence dispatch algorithm (SQD) iteratively finds the closest event-robot pair, includes the event in the dispatch schedule of the selected robot, and updates its position accordingly. When event-robot distances are multiplied by robot resistance (the inverse of the remaining energy), the corresponding energy-balanced variants are obtained. We also present generalizations which handle multiple visits and timing constraints. Our localized algorithm MAD is based on an information mesh infrastructure and local auctions within the robot network for obtaining the optimal dispatch schedule for each robot. The simulations conducted confirm the advantages of our algorithms over other existing solutions in terms of average robot-event distance and lifetime.
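
Since SQD is described operationally in the abstract, a minimal sketch of that greedy loop is given below; the data model (2-D points, Euclidean distance) is an assumption, and the energy-balanced variant would simply scale each distance by the robot's resistance before taking the minimum.

```python
# Minimal sketch of the sequence dispatch (SQD) idea described above: repeatedly
# pick the globally closest (event, robot) pair, append the event to that
# robot's schedule, and move the robot's position to the event. Multiplying the
# distance by a robot's "resistance" (inverse remaining energy) would give the
# energy-balanced variant.
import math

def sqd(events, robots):
    """events: list of (x, y); robots: dict name -> (x, y). Returns schedules."""
    schedule = {name: [] for name in robots}
    pos = dict(robots)
    remaining = list(events)
    while remaining:
        dist, event, robot = min(
            (math.dist(pos[r], e), e, r)
            for e in remaining for r in pos)
        schedule[robot].append(event)
        pos[robot] = event                   # the robot continues from this event
        remaining.remove(event)
    return schedule

if __name__ == "__main__":
    events = [(1, 1), (2, 5), (8, 3), (9, 9)]
    robots = {"r1": (0, 0), "r2": (10, 10)}
    print(sqd(events, robots))
```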

Relevance: 80.00%

Abstract:

In recent years, we have witnessed substantial deployment of real-time streaming applications, such as video surveillance systems at road crossings in a city. So far, real-world applications mainly rely on the traditional, well-known client-server and peer-to-peer schemes as the fundamental mechanisms for communication. However, due to the limited resources on each terminal device, these two schemes cannot fully leverage the processing capability between the source and destination of the video traffic, which leads to limited streaming services. For this reason, many QoS-sensitive applications cannot be supported in the real world. In this paper, we are motivated to address this problem by proposing a novel multi-server based framework. In this framework, multiple servers collaborate with each other to form a virtual server (also called a cloud server) and provide high-quality services such as real-time stream delivery and storage. Based on this framework, we further introduce a (1-ε) approximation algorithm to solve the NP-complete "maximum services" (MS) problem, with the intention of handling a large number of streaming flows originated by networks and maximizing the total number of services. Moreover, in order to back up the streaming data for later retrieval, an algorithm is proposed on top of the framework to implement backups and maximize streaming flows simultaneously. We conduct a series of simulation experiments to evaluate the performance of the newly proposed framework and compare our scheme to several traditional solutions. The results suggest that our proposed scheme significantly outperforms the traditional solutions.
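
The (1-ε) approximation itself is not described in the abstract, so the sketch below is only a naive greedy baseline for the same "maximum services" objective: admit each flow onto the collaborating server with the most spare capacity. The flow and server data are invented for illustration.

```python
# Naive greedy baseline for the "maximum services" goal (not the paper's
# (1-epsilon) approximation algorithm): each incoming flow is admitted onto the
# collaborating server with the most spare capacity, and rejected only when no
# server can carry it.
def greedy_admit(flows, capacities):
    """flows: list of (flow_id, bandwidth); capacities: dict server -> capacity."""
    spare = dict(capacities)
    placement, rejected = {}, []
    for flow_id, bw in sorted(flows, key=lambda f: f[1]):   # small flows first
        server = max(spare, key=spare.get)
        if spare[server] >= bw:
            placement[flow_id] = server
            spare[server] -= bw
        else:
            rejected.append(flow_id)
    return placement, rejected

if __name__ == "__main__":
    flows = [("f1", 4), ("f2", 6), ("f3", 3), ("f4", 5)]
    servers = {"s1": 8, "s2": 7}
    print(greedy_admit(flows, servers))
```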

Relevance: 80.00%

Abstract:

While SQL injection attacks have been plaguing web application systems for years, the possibility of them affecting RFID systems was only identified very recently. However, very little work exists to mitigate this serious security threat to RFID-enabled enterprise systems. In this paper, we propose a policy-based SQLIA detection and prevention method for RFID systems. The proposed technique creates data validation and sanitization policies during content analysis and enforces those policies during runtime monitoring. We tested all possible types of dynamic queries that may be generated in RFID systems against all possible types of attacks that can be mounted on those systems. We present an analysis and evaluation that demonstrate the effectiveness of the proposed approach in mitigating SQLIA.
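
As an illustration of what a validation-and-sanitization policy enforced at runtime can look like, the sketch below checks tag data against per-field whitelist patterns before issuing a parameterized query; the policy format, field names and patterns are assumptions, not the paper's policy language.

```python
# Illustrative sketch of policy-based validation before a dynamic query is
# issued: a per-field policy (whitelist pattern) is created at content-analysis
# time and enforced at runtime, and the query itself is parameterized so tag
# data never reaches the SQL parser as code. The policy format is an assumption.
import re
import sqlite3

POLICIES = {
    # RFID EPC codes as a whitelist pattern (illustrative)
    "epc": re.compile(r"^urn:epc:id:sgtin:[0-9.]+$"),
    "reader_id": re.compile(r"^[A-Za-z0-9_-]{1,32}$"),
}

def validate(field: str, value: str) -> bool:
    """Runtime monitoring: reject any value that violates its policy."""
    policy = POLICIES.get(field)
    return bool(policy and policy.fullmatch(value))

def record_observation(conn, epc: str, reader_id: str):
    if not (validate("epc", epc) and validate("reader_id", reader_id)):
        raise ValueError("input rejected by SQLIA policy")
    # Parameterized query: the driver binds the values, so they cannot alter the SQL.
    conn.execute("INSERT INTO observations(epc, reader) VALUES (?, ?)",
                 (epc, reader_id))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE observations(epc TEXT, reader TEXT)")
    record_observation(conn, "urn:epc:id:sgtin:0614141.107346.2017", "dock_door_1")
    try:
        record_observation(conn, "x'; DROP TABLE observations;--", "dock_door_1")
    except ValueError as err:
        print("blocked:", err)
```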

Relevance: 80.00%

Abstract:

With the increasing popularity of utility-oriented computing, where resources are traded as services, efficient management of quality of service (QoS) has become increasingly significant to both service consumers and service providers. In the context of distributed multimedia content adaptation deployed on service-oriented computing, how to ensure the stringent QoS requirements of the content adaptation is a significant and immediate challenge. However, QoS guarantees in this context have not been accorded the attention they deserve. In this paper, we address this problem. We formulate SLA management for distributed multimedia content adaptation deployed on service-oriented computing as an integer programming problem. We propose an SLA management framework that enables the service provider to determine deliverable QoS before settling an SLA with potential service consumers, in order to optimize QoS guarantees. We analyze the performance of the proposed strategy under various conditions in terms of the SLA success rate, the rejection rate, and the impact of resource data errors on potential violation of the agreed-upon SLA. We also compare the proposed SLA management framework with a baseline approach in which the distributed multimedia content adaptation is deployed on a service-oriented platform without SLA consideration. The results of the experiments show that the proposed SLA management framework substantially outperforms the baseline approach, confirming that SLA management is a core requirement for the deployment of distributed multimedia content adaptation on service-oriented systems.
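
The paper's exact integer programming formulation is not given in the abstract; the sketch below brute-forces a tiny instance of the same kind of decision (which SLA requests to accept and which adaptation node serves each one, subject to capacity and deadline) to make the formulation concrete. Requests, nodes, and the FIFO deadline model are all assumptions for illustration.

```python
# Tiny brute-force version of the kind of integer program sketched above (the
# paper's exact formulation is not given in the abstract): choose which SLA
# requests to accept and which adaptation node serves each accepted request,
# so that no node's capacity is exceeded and every accepted request meets its
# deadline, maximizing the number of accepted SLAs.
import itertools

REQUESTS = [                      # (name, processing_time, deadline) - illustrative
    ("r1", 3.0, 4.0),
    ("r2", 2.0, 5.0),
    ("r3", 4.0, 6.0),
]
NODES = {"n1": 5.0, "n2": 4.0}    # node -> available processing capacity

def feasible(assignment):
    """assignment: tuple with one entry per request, a node name or None (rejected)."""
    load = {n: 0.0 for n in NODES}
    for (name, proc, deadline), node in zip(REQUESTS, assignment):
        if node is None:
            continue
        load[node] += proc
        if load[node] > NODES[node] or load[node] > deadline:
            return False          # capacity exceeded or deadline missed (FIFO per node)
    return True

def best_assignment():
    options = list(NODES) + [None]
    best = max((a for a in itertools.product(options, repeat=len(REQUESTS))
                if feasible(a)),
               key=lambda a: sum(n is not None for n in a))
    return dict(zip((r[0] for r in REQUESTS), best))

if __name__ == "__main__":
    print(best_assignment())
```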

Relevance: 80.00%

Abstract:

Cloud computing is an open and promising computing paradigm in which customers can deploy and utilize IT services in a pay-as-you-go fashion while saving huge capital investment in their own IT infrastructure. Due to this openness and virtualization, various malicious service providers may exist in cloud environments, and some of them may record service data from a customer and then collectively deduce the customer's private information without permission. Therefore, from the perspective of cloud customers, it is essential to take certain technical actions to protect their privacy at the client side. Noise obfuscation, which utilizes noise data, is an effective approach in this regard: noise service requests can be generated and injected into real customer service requests so that malicious service providers cannot distinguish which requests are real ones if the requests' occurrence probabilities are about the same, and consequently the related customer privacy can be protected. However, existing representative noise generation strategies have not considered possible fluctuations of occurrence probabilities; such fluctuations cannot be concealed by these strategies, which poses a serious risk to the customer's privacy. To address this probability-fluctuation privacy risk, we systematically develop a novel time-series pattern based noise generation strategy for privacy protection in the cloud. First, we analyze this privacy risk and present a novel cluster based algorithm to generate time intervals dynamically. Then, based on these time intervals, we investigate the corresponding probability fluctuations and propose a novel time-series pattern based forecasting algorithm. Lastly, based on the forecasting algorithm, we present our noise generation strategy to withstand the probability-fluctuation privacy risk. The simulation evaluation demonstrates that our strategy can significantly improve the effectiveness of cloud privacy protection against the probability-fluctuation privacy risk.
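
The sketch below illustrates only the basic noise-obfuscation idea the paper builds on, not its time-series forecasting strategy: within one interval, count the real requests per service and inject enough noise requests that every service's occurrence probability looks roughly uniform to the provider. Service names and counts are invented for the example.

```python
# Sketch of the basic noise obfuscation idea only (the paper's contribution,
# forecasting probability fluctuations over dynamically generated time
# intervals, is not reproduced here): within one time interval, count the real
# requests per service and inject enough noise requests that every service's
# occurrence probability looks roughly the same to the provider.
from collections import Counter

def noise_requests(real_requests, services):
    counts = Counter(real_requests)
    target = max(counts[s] for s in services)      # level everything up to the max
    noise = []
    for s in services:
        noise.extend([s] * (target - counts[s]))
    return noise

if __name__ == "__main__":
    services = ["map", "mail", "pay"]
    real = ["map", "map", "map", "mail", "pay", "map"]
    noise = noise_requests(real, services)
    mixed = Counter(real + noise)
    print(noise)        # noise requests to inject in this interval
    print(mixed)        # the provider now sees uniform occurrence counts
```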

Relevance: 80.00%

Abstract:

Two-way relaying systems are known to be capable of providing higher spectral efficiency than one-way relaying systems. However, the channel estimation problem for two-way relaying systems becomes more complicated. In this paper, we propose a superimposed channel training scheme for two-way MIMO relay communication systems, in which the individual channel information for the users-relay and relay-users links is estimated. The optimal structure of the source and relay training sequences is derived by minimizing the mean-squared error (MSE) of the channel estimation. We also optimize the power allocation between the source and relay training sequences to improve the performance of the algorithm. Numerical examples demonstrate the performance of the proposed channel training algorithm.
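
As a reference point for the terms above (and not the paper's superimposed two-way training design), the sketch below performs a basic least-squares MIMO channel estimate from a known training block and reports the empirical MSE; the antenna counts, training length, and SNR are assumptions.

```python
# Basic least-squares MIMO channel estimation from a known training block (a
# reference point for the terms above, not the paper's superimposed two-way
# design): with received block Y = H X + N and training matrix X, the LS
# estimate is H_hat = Y X^H (X X^H)^{-1}, and the MSE measures estimation quality.
import numpy as np

rng = np.random.default_rng(0)
nr, nt, L = 4, 4, 16          # receive antennas, transmit antennas, training length
snr_db = 20.0

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
# Orthogonal training sequences (rows of a DFT matrix) are a common MSE-friendly choice.
X = np.exp(-2j * np.pi * np.outer(np.arange(nt), np.arange(L)) / L)

noise_var = 10 ** (-snr_db / 10)
N = np.sqrt(noise_var / 2) * (rng.standard_normal((nr, L)) + 1j * rng.standard_normal((nr, L)))
Y = H @ X + N

H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
mse = np.mean(np.abs(H - H_hat) ** 2)
print(f"per-entry MSE of the LS channel estimate: {mse:.2e}")
```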