817 results for Distributed Systems
Abstract:
The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth of grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for workflow applications with security constraints in distributed data-intensive computing environments and present a novel security-constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to construct efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization algorithm and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling workflow applications. © 2010 Elsevier Inc. All rights reserved.
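As a toy illustration of the particle-swarm scheduling idea above, the sketch below assigns workflow tasks to compute nodes so as to minimise makespan. The task costs, node speeds and PSO parameters are made up, and the paper's security-constraint model and variable-neighborhood variant are not modelled; continuous particle positions are simply rounded to node indices.

```python
import random

# Hypothetical instance: 6 tasks, 3 heterogeneous nodes (made-up numbers).
TASK_COST = [4, 2, 7, 3, 5, 1]   # abstract work units per task
NODE_SPEED = [1.0, 1.5, 2.0]     # work units processed per time unit

def makespan(assignment):
    """Completion time of the slowest node under a task->node assignment."""
    load = [0.0] * len(NODE_SPEED)
    for task, node in enumerate(assignment):
        load[node] += TASK_COST[task] / NODE_SPEED[node]
    return max(load)

def decode(particle):
    """Round a continuous position to a valid discrete assignment."""
    hi = len(NODE_SPEED) - 1
    return [min(hi, max(0, round(x))) for x in particle]

def pso(n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4, seed=1):
    rng = random.Random(seed)
    dim, hi = len(TASK_COST), len(NODE_SPEED) - 1
    pos = [[rng.uniform(0, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [makespan(decode(p)) for p in pos]
    gbest_f = min(pbest_f)
    gbest = pbest[pbest_f.index(gbest_f)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # standard inertia + cognitive + social velocity update
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(float(hi), max(0.0, pos[i][d] + vel[i][d]))
            f = makespan(decode(pos[i]))
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return decode(gbest), gbest_f
```

The multi-start variants mentioned in the abstract would simply rerun this loop from fresh random swarms and keep the best schedule found.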
Abstract:
This paper elaborates on the ergodic capacity of fixed-gain amplify-and-forward (AF) dual-hop systems, which have recently attracted considerable research and industry interest. In particular, two novel capacity bounds that allow for fast and efficient computation and apply for nonidentically distributed hops are derived. More importantly, they are generic since they apply to a wide range of popular fading channel models. Specifically, the proposed upper bound applies to Nakagami-m, Weibull, and generalized-K fading channels, whereas the proposed lower bound is more general and applies to Rician fading channels. Moreover, it is explicitly demonstrated that the proposed lower and upper bounds become asymptotically exact in the high signal-to-noise ratio (SNR) regime. Based on our analytical expressions and numerical results, we gain valuable insights into the impact of model parameters on the capacity of fixed-gain AF dual-hop relaying systems. © 2011 IEEE.
Abstract:
We study the effects of post-selection measurements on both the non-classicality of the state of a mechanical oscillator and the entanglement between two mechanical systems that are part of a distributed optomechanical network. We address the cases of both Gaussian and non-Gaussian measurements, identifying in which cases simple photon counting and Geiger-like measurements are effective in distilling a strongly non-classical mechanical state and enhancing the purely mechanical entanglement between two elements of the network.
Abstract:
We consider the distribution of entanglement from a multimode optical driving source to a network of remote and independent optomechanical systems. By focusing on the tripartite case, we analyse the effects that the features of the optical input states have on the degree and sharing structure of the distributed, fully mechanical, entanglement. This study, which is conducted by looking at the mechanical steady state, highlights the structure of the entanglement distributed among the nodes and determines the relative efficiency of bipartite versus tripartite entanglement transfer. We discuss a few open points, some of which are directed towards bypassing such limitations.
Abstract:
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges.
We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than O(log Delta) times its original diameter, where Delta is the maximum degree of the network initially. We note that for many peer-to-peer systems, Delta is polylogarithmic, so the diameter increase would be an O(log log n) multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round, and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
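The attack/repair round described above can be simulated with a toy healing rule. The ring repair below (link the deleted node's neighbours into a cycle) is a generic self-healing strategy used purely for illustration; it is not the paper's data structure and does not achieve its O(log Delta) diameter or degree-increase-by-3 guarantees.

```python
def delete_and_heal(adj, v):
    """Remove node v from an adjacency-set dict, then connect its former
    neighbours into a ring so the network stays connected locally.
    Each surviving neighbour gains at most 2 new edges per deletion."""
    nbrs = sorted(adj.pop(v))
    for edges in adj.values():
        edges.discard(v)
    for i, u in enumerate(nbrs):
        w = nbrs[(i + 1) % len(nbrs)]
        if u != w:
            adj[u].add(w)
            adj[w].add(u)
    return adj

# star graph: hub 0 connected to leaves 1..5
adj = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}
delete_and_heal(adj, 0)                          # adversary removes the hub
assert all(len(edges) == 2 for edges in adj.values())  # leaves now form a ring
```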
Abstract:
The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs, on both the client and server sides, to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length, and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
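For reference, a minimal CPU sketch of the P/Q parity arithmetic that RAID-6 requires (the computation the paper offloads to GPUs). The generator polynomial is the one used by Linux software RAID-6; chunk layout, striping, and the Lustre integration are omitted, and the sample chunks are invented.

```python
GF_POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, as in Linux software RAID-6

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the RAID-6 polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
        b >>= 1
    return r

def pq_parity(chunks):
    """P = XOR of the data chunks; Q = sum over i of g^i * chunk_i, g = 2.
    P alone recovers one lost chunk; P and Q together recover any two."""
    p, q = bytearray(len(chunks[0])), bytearray(len(chunks[0]))
    coef = 1
    for chunk in chunks:
        for j, byte in enumerate(chunk):
            p[j] ^= byte
            q[j] ^= gf_mul(coef, byte)
        coef = gf_mul(coef, 2)
    return bytes(p), bytes(q)

chunks = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
p, q = pq_parity(chunks)
# a single lost chunk is recoverable from P and the survivors:
recovered = bytes(a ^ b ^ c for a, b, c in zip(p, chunks[0], chunks[2]))
assert recovered == chunks[1]
```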
Abstract:
Inter-component communication has always been of great importance in the design of software architectures and connectors have been considered as first-class entities in many approaches [1][2][3]. We present a novel architectural style that is derived from the well-established domain of computer networks. The style adopts the inter-component communication protocol in a novel way that allows large scale software reuse. It mainly targets real-time, distributed, concurrent, and heterogeneous systems.
Abstract:
This paper investigates the uplink achievable rates of massive multiple-input multiple-output (MIMO) antenna systems in Ricean fading channels, using maximal-ratio combining (MRC) and zero-forcing (ZF) receivers, assuming perfect and imperfect channel state information (CSI). In contrast to previous relevant works, the fast fading MIMO channel matrix is assumed to have an arbitrary-rank deterministic component as well as a Rayleigh-distributed random component. We derive tractable expressions for the achievable uplink rate in the large-antenna limit, along with approximating results that hold for any finite number of antennas. Based on these analytical results, we obtain the scaling law that the users' transmit power should satisfy, while maintaining a desirable quality of service. In particular, it is found that regardless of the Ricean K-factor, in the case of perfect CSI, the approximations converge to the same constant value as the exact results, as the number of base station antennas, M, grows large, while the transmit power of each user can be scaled down proportionally to 1/M. If CSI is estimated with uncertainty, the same result holds true but only when the Ricean K-factor is non-zero. Otherwise, if the channel experiences Rayleigh fading, we can only cut the transmit power of each user proportionally to 1/√M. In addition, we show that with an increasing Ricean K-factor, the uplink rates will converge to fixed values for both MRC and ZF receivers.
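The 1/M power-scaling law can be sanity-checked with a toy single-user Monte Carlo under i.i.d. Rayleigh fading (not the paper's Ricean, multi-user, or ZF settings): the post-MRC SNR is p·||h||², which grows like p·M, so cutting the transmit power to p = E/M leaves the rate roughly constant in M. All numbers here are made up for illustration.

```python
import math
import random

def avg_rate(M, power, trials=2000, seed=7):
    """Monte Carlo average of log2(1 + power * ||h||^2) for an M-antenna
    receiver with i.i.d. CN(0, 1) channel entries (unit power per antenna)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        h2 = sum(rng.gauss(0, math.sqrt(0.5)) ** 2 +
                 rng.gauss(0, math.sqrt(0.5)) ** 2 for _ in range(M))
        total += math.log2(1 + power * h2)
    return total / trials

E = 10.0  # hypothetical per-user power budget at M = 1
r64, r256 = avg_rate(64, E / 64), avg_rate(256, E / 256)
assert abs(r64 - r256) < 0.2               # rate is nearly invariant in M
assert abs(r64 - math.log2(1 + E)) < 0.3   # and close to the fixed-SNR rate
```

Under imperfect CSI in Rayleigh fading, the abstract's weaker 1/sqrt(M) scaling would apply instead; that case is not modelled here.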
Abstract:
Renewable energy is high on international and national agendas. Currently, grid-connected photovoltaic (PV) systems are a popular technology for converting solar energy into electricity. Existing PV panels have a relatively low and varying output voltage, so the converter installed between the PVs and the grid should provide high step-up ratios and versatile control capabilities. In addition, the output current of PV systems is rich in harmonics, which affect the power quality of the grid. In this paper, a new multi-stage hysteresis control of a step-up DC-DC converter is proposed for integrating PVs into a single-phase power grid. The proposed circuitry and control method are experimentally validated by testing on a 600 W prototype converter. The developed technology has significant economic implications and could be applied to many distributed generation (DG) systems, especially in developing countries, which have large numbers of small PVs connected to their single-phase distribution networks.
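For illustration, a minimal single-band hysteresis control loop (not the paper's multi-stage scheme): the switch closes when the measured current drops below the reference minus half the band and opens above the reference plus half the band, bounding the ripple. The crude first-order "plant" and all numbers are invented.

```python
def hysteresis_step(i_meas, i_ref, band, switch_on):
    """Return the next switch state for a single hysteresis band."""
    if i_meas < i_ref - band / 2:
        return True          # current too low: close the switch
    if i_meas > i_ref + band / 2:
        return False         # current too high: open the switch
    return switch_on         # inside the band: keep the previous state

# crude plant model: current rises while the switch is on, decays while off
i, on, trace = 0.0, False, []
for _ in range(50):
    on = hysteresis_step(i, i_ref=5.0, band=1.0, switch_on=on)
    i += 0.3 if on else -0.2
    trace.append(i)
# after the start-up transient the current stays near the 5.0 A reference
assert all(4.0 <= x <= 6.0 for x in trace[-20:])
```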
Abstract:
Interaction of a stream of high-energy electrons with the background plasma plays an important role in astrophysical phenomena such as interplanetary and stellar bow shocks and the Earth's foreshock emission. It is not yet fully understood how electrostatic solitary waves are produced at the bow shock. Interestingly, a population of energetic suprathermal electrons was also found to exist in those environments. Previously, we studied the properties of negative electrostatic potential solitary structures existing in such a plasma with excess suprathermal electrons. In the present study, we investigate the existence conditions and propagation properties of electron-acoustic solitary waves in a plasma consisting of an electron beam fluid, a cold electron fluid, and hot suprathermal electrons modeled by a kappa distribution function. The Sagdeev pseudopotential method is used to investigate the occurrence of stationary-profile solitary waves. We determine how the electron-acoustic soliton characteristics depend on the electron beam parameters. It is found that the existence domain for solitons becomes narrower with an increase in the suprathermality of the hot electrons, an increase in the beam speed, and a decrease in the beam-to-cold-electron population ratio. These results lead to a better understanding of the formation of electron-acoustic solitary waves observed in those space plasma systems characterized by kappa-distributed electrons and inertial drifting (beam) electrons.
Abstract:
As the emphasis on initiatives that can improve environmental efficiency while simultaneously maintaining economic viability has escalated in recent years, attention has turned to more radical concepts of operation. In particular, the cruiser–feeder concept has shown potential for a new generation, environmentally friendly, air-transport system to alleviate the growing pressure on the passenger air-transportation network. However, a full evaluation of realizable benefits is needed to determine how the design and operation of potential feeder-aircraft configurations impact on the feasibility of the overall concept. This paper presents an analysis of a cruiser–feeder concept, in which fuel is transferred between the feeder and the cruiser in an aerial-refueling configuration to extend range while reducing cruiser weight, compared against the effects of escalating existing technology levels while retaining the existing passenger levels. Up to 14% fuel-burn and 12% operating-cost savings can be achieved when compared to a similar technology-level aircraft concept without aerial refueling, representing up to 26% in fuel burn and 25% in total operating cost over the existing operational model at today’s standard fleet technology and performance. However, these potential savings are not uniformly distributed across the network, and the system is highly sensitive to the routes serviced, with reductions in revenue-generation potential observed across the network for aerial-refueling operations due to reductions in passenger revenue.
Abstract:
In this paper, our previous work on Principal Component Analysis (PCA) based fault detection is extended to the dynamic monitoring and detection of loss-of-main in power systems using wide-area synchrophasor measurements. In the previous work, a static PCA model was built and verified to be capable of detecting and extracting system fault events; however, the false alarm rate was high. To address this problem, this paper uses the well-known 'time lag shift' method to incorporate dynamic behavior into the PCA model based on the synchronized measurements from Phasor Measurement Units (PMUs), an approach termed Dynamic Principal Component Analysis (DPCA). Compared with the static PCA approach, as well as with traditional passive mechanisms of loss-of-main detection, the proposed DPCA procedure describes how the synchrophasors are linearly auto- and cross-correlated, based on conducting a singular value decomposition of the augmented time-lagged synchrophasor matrix. As in the static PCA method, two statistics, namely T² and Q, with confidence limits are calculated to form intuitive charts for engineers or operators to monitor the loss-of-main situation in real time. The effectiveness of the proposed methodology is evaluated on the loss-of-main monitoring of a real system, where the historic data are recorded from PMUs installed at several locations in the UK/Ireland power system.
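The 'time lag shift' step can be sketched as follows: each synchrophasor sample is stacked with its previous l samples, so that temporal auto- and cross-correlations enter the PCA model. The singular value decomposition and the T²/Q confidence limits would then be computed on this augmented matrix (omitted here); the sample values are made up.

```python
def lagged_matrix(X, lags):
    """X: list of samples (rows), each a list of synchrophasor channels.
    Returns augmented rows [x_t, x_{t-1}, ..., x_{t-lags}] for t >= lags."""
    return [sum((X[t - k] for k in range(lags + 1)), [])
            for t in range(lags, len(X))]

# two channels (e.g. a voltage magnitude and an angle), four time steps
X = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]]
aug = lagged_matrix(X, lags=1)
assert aug == [[2.0, 20.0, 1.0, 10.0],
               [3.0, 30.0, 2.0, 20.0],
               [4.0, 40.0, 3.0, 30.0]]
```

Static PCA corresponds to lags=0; increasing the lag count is what lets DPCA model the dynamics that drove the static model's false alarms.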