244 results for Distributed eLearning Centre (DeLC)
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of the resources and to decrease the Total Cost of Ownership (TCO). This reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps when a failure is predicted. By maintaining health vectors for all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware error and failure events. This in turn drives an availability-aware middleware to take proactive action, even before the application is affected, when the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
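The abstract does not spell out the prediction model, so the following is only a minimal Python sketch of the general idea: keep a per-resource health vector of recent error events and map it to a failure probability. The event names, weights, decay factor and threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's model): maintain a per-resource "health
# vector" of recent error-event counts and map it to a failure probability
# with a logistic score. Weights and thresholds are illustrative assumptions.
import math
from collections import defaultdict

DECAY = 0.9          # exponential decay applied on each new observation (assumed)
WEIGHTS = {"corrected_mem": 0.2, "disk_retry": 0.5, "cpu_mce": 1.0}  # assumed severities

class HealthMonitor:
    def __init__(self):
        self.health = defaultdict(lambda: defaultdict(float))  # resource -> event -> score

    def record_event(self, resource, event):
        """Fold a new hardware error/failure event into the resource's health vector."""
        vec = self.health[resource]
        for ev in vec:
            vec[ev] *= DECAY               # age out old evidence
        vec[event] += 1.0

    def failure_probability(self, resource):
        """Map the weighted health vector to a probability in (0, 1)."""
        score = sum(WEIGHTS.get(ev, 0.1) * cnt
                    for ev, cnt in self.health[resource].items())
        return 1.0 / (1.0 + math.exp(-(score - 3.0)))   # logistic threshold, assumed

# An availability-aware middleware could poll failure_probability() and trigger
# proactive migration when it crosses an SLA-derived threshold, e.g. 0.8.
```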
Abstract:
This work describes the parallelization of a high-resolution flow solver on unstructured meshes, HIFUN-3D, an unstructured-data-based finite volume solver for the 3-D Euler equations. For mesh partitioning we use METIS, a software package based on multilevel graph partitioning. The unstructured graph used for partitioning carries weights on both its vertices and edges. The data residing on every processor is split into four layers; this novel procedure for handling data helps maintain the effectiveness of the serial code. Communication of data across the processors is achieved by explicit message passing using the standard blocking-mode feature of the Message Passing Interface (MPI). The parallel code is tested on the PACE++128 available at the CFD Center.
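As an illustration of the blocking point-to-point exchange described above, here is a minimal mpi4py sketch of a halo exchange between neighbouring partitions. The actual solver is not written in Python, and the toy neighbour lists below stand in for the METIS-derived partitioning; they are assumptions made only to keep the example runnable.

```python
# Illustrative halo (inter-partition) exchange with blocking MPI calls, in the
# spirit of the communication described above.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Toy partition graph: each rank exchanges with its left/right neighbour.
# In the real solver these lists would come from the METIS partitioning.
neighbours = [r for r in (rank - 1, rank + 1) if 0 <= r < size]
send_buffers = {r: np.full(4, float(rank)) for r in neighbours}   # dummy interface data
recv_buffers = {r: np.empty(4) for r in neighbours}

# Pair the blocking Send/Recv calls by rank order to avoid deadlock.
for nbr in sorted(neighbours):
    if rank < nbr:
        comm.Send(send_buffers[nbr], dest=nbr, tag=0)
        comm.Recv(recv_buffers[nbr], source=nbr, tag=0)
    else:
        comm.Recv(recv_buffers[nbr], source=nbr, tag=0)
        comm.Send(send_buffers[nbr], dest=nbr, tag=0)
```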
Abstract:
We report on the dielectric properties of bismuth aluminate and gallate with Bi:Al(Ga) ratios of 1:1 and 12:1, prepared at high temperature and ambient pressure. These compounds crystallize in a noncentrosymmetric body-centered cubic structure (space group I23) with a ≈ 10.18 Å rather than in the perovskite structure. This cubic phase is related to the γ-Bi2O3 structure, which has the actual chemical formula Bi24(3+)(Bi(3+)Bi(5+))O40−δ. In the aluminates and gallates studied by us, the Al and Ga ions are distributed over the 24f and 2a sites. These compounds exhibit ferroelectric hysteresis at room temperature with a weak polarization. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The author presents adaptive control techniques for controlling the flow of real-time jobs from the peripheral processors (PPs) to the central processor (CP) of a distributed system with a star topology. He considers two classes of flow control mechanisms: (1) proportional control, where a certain proportion of the load offered to each PP is sent to the CP, and (2) threshold control, where there is a maximum rate at which each PP can send jobs to the CP. The problem is to obtain good algorithms for dynamically adjusting the control level at each PP in order to prevent overload of the CP, when the load offered by the PPs is unknown and varying. The author formulates the problem approximately as a standard system control problem in which the system has unknown parameters that are subject to change. Using well-known techniques (e.g., naive-feedback-controller and stochastic approximation techniques), he derives adaptive controls for the system control problem. He demonstrates the efficacy of these controls in the original problem by using the control algorithms in simulations of a queuing model of the CP and the load controls.
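As a rough illustration of the threshold-control idea, the sketch below applies a Robbins-Monro style stochastic-approximation update to a PP's admission threshold so that the observed CP utilisation tracks a target. It is not the author's exact algorithm; the target utilisation, gain schedule and bounds are assumptions.

```python
# Minimal sketch of a stochastic-approximation style threshold control: each
# peripheral processor (PP) adjusts its maximum send rate so that the observed
# central processor (CP) utilisation tracks a target. Illustrative only.

TARGET_UTIL = 0.8            # desired CP utilisation (assumed)
MIN_RATE, MAX_RATE = 0.0, 100.0

def update_threshold(threshold, observed_cp_util, step):
    """One Robbins-Monro style update of a PP's admission threshold."""
    # If the CP is under-utilised, allow more jobs through; if overloaded, throttle.
    new_threshold = threshold + step * (TARGET_UTIL - observed_cp_util)
    return min(MAX_RATE, max(MIN_RATE, new_threshold))

# Example: decreasing step sizes a_n = a0 / n give the usual stochastic
# approximation behaviour when the offered load is stationary.
threshold, a0 = 10.0, 5.0
for n, util in enumerate([0.95, 0.9, 0.85, 0.82, 0.8], start=1):
    threshold = update_threshold(threshold, util, a0 / n)
```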
Abstract:
We discuss the key issues in the deployment of sparse sensor networks. The network monitors several environmental parameters and is deployed in a semi-arid region for the benefit of small and marginal farmers. We begin by discussing the problems of an existing, unreliable 1 sq km sparse network deployed in a village. The proposed solutions are implemented in a new cluster, a reliable 5 sq km network. Our contributions are twofold. First, we describe a novel methodology to deploy a sparse, reliable data-gathering sensor network and evaluate the "safe" or "reliable" distance between nodes using propagation models. Second, we address the problem of transporting data from rural aggregation servers to urban data centres. This paper tracks our steps in deploying a sensor network in a village in India, aiming to provide better diagnosis for better crop management. Keywords - Rural, Agriculture, CTRS, Sparse.
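The paper's propagation model and radio parameters are not reproduced here; the sketch below only illustrates how a "safe" inter-node distance can be backed out of the standard log-distance path-loss model. All numeric values are assumptions.

```python
# Rough sketch of estimating a "safe" inter-node distance from the standard
# log-distance path-loss model; the paper's actual model and parameters are
# not reproduced here.
import math

def safe_distance(tx_power_dbm, rx_sensitivity_dbm, fade_margin_db,
                  pl_d0_db, d0_m=1.0, path_loss_exp=3.0):
    """Largest distance at which received power stays above sensitivity + margin.

    Log-distance model: PL(d) = PL(d0) + 10 * n * log10(d / d0).
    """
    max_path_loss = tx_power_dbm - rx_sensitivity_dbm - fade_margin_db
    return d0_m * 10 ** ((max_path_loss - pl_d0_db) / (10 * path_loss_exp))

# Example with typical low-power radio figures (illustrative only):
d = safe_distance(tx_power_dbm=10, rx_sensitivity_dbm=-94,
                  fade_margin_db=10, pl_d0_db=40)
print(f"safe inter-node distance ~ {d:.0f} m")
```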
Abstract:
Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. Also of interest is minimization of the bandwidth required to repair the system following a node failure. In a recent paper, Wu et al. characterize the tradeoff between the repair bandwidth and the amount of data stored per node, and prove the existence of regenerating codes that achieve this tradeoff. In this paper, we introduce Exact Regenerating Codes, which are regenerating codes possessing the additional property of being able to duplicate the data stored at a failed node. Such codes require low processing and communication overheads, making the system practical and easy to maintain. An explicit construction of exact regenerating codes is provided for the minimum-bandwidth point on the storage-repair-bandwidth tradeoff, relevant to distributed-mail-server applications. A subspace-based approach is provided and shown to yield necessary and sufficient conditions on a linear code to possess the exact regeneration property, as well as to prove the uniqueness of our construction. Also included in the paper is an explicit construction of regenerating codes for the minimum-storage point for parameters relevant to storage in peer-to-peer systems. This construction supports a variable number of nodes and can handle multiple simultaneous node failures. All constructions given in the paper are of low complexity, requiring a low field size in particular.
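For context, the storage-repair-bandwidth tradeoff and its two extreme points referred to above can be stated as follows, in the notation commonly used in the regenerating-codes literature (background material, not a result of this particular paper):

```latex
% Cut-set bound on the storage--repair-bandwidth tradeoff, as commonly stated:
% a file of size B is stored across n nodes, each holding \alpha symbols; a
% replacement node downloads \beta symbols from each of d helper nodes, so the
% repair bandwidth is \gamma = d\beta.
\[
  \sum_{i=0}^{k-1} \min\{\alpha,\, (d-i)\beta\} \;\ge\; B .
\]
% The two extreme operating points of this tradeoff are
\[
  \text{MSR: } (\alpha, \gamma) = \Bigl(\tfrac{B}{k},\; \tfrac{dB}{k(d-k+1)}\Bigr),
  \qquad
  \text{MBR: } (\alpha, \gamma) = \Bigl(\tfrac{2dB}{k(2d-k+1)},\; \tfrac{2dB}{k(2d-k+1)}\Bigr).
\]
```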
Abstract:
In this paper we address the problem of distributed transmission of functions of correlated sources over a fast fading multiple access channel (MAC). This is a basic building block in a hierarchical sensor network used to estimate a random field, where the cluster head is interested only in estimating a function of the observations. The observations are transmitted to the cluster head through a fast fading MAC. We provide sufficient conditions for lossy transmission when the encoders and decoders are provided with partial information about the channel state. Furthermore, signal side information may be available at the encoders and the decoder. Various previous studies are recovered as special cases. Efficient joint source-channel coding schemes are discussed for the transmission of discrete- and continuous-alphabet sources to recover function values.
Abstract:
An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases; the first phase was considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of a DCS and of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for the design of the protocol. The applicability of the design procedure is illustrated with the example of a computing system on board a spacecraft. A Petri-net-based approach has been adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
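To make the Petri-net modelling step concrete, here is a minimal Python sketch of Petri-net machinery (places, transitions, firing). The tiny sender/receiver handshake it executes is an illustrative net, not the spacecraft protocol designed in the paper.

```python
# Minimal Petri-net machinery of the kind used to model protocols: places hold
# tokens, and a transition fires when all of its input places are marked.

places = {"ready_to_send": 1, "channel": 0, "ready_to_recv": 1, "received": 0}

# transition -> (input places, output places)
transitions = {
    "send":    (["ready_to_send"], ["channel"]),
    "receive": (["channel", "ready_to_recv"], ["received"]),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(places[p] > 0 for p in ins)

def fire(t):
    ins, outs = transitions[t]
    for p in ins:
        places[p] -= 1
    for p in outs:
        places[p] += 1

# Fire transitions until the net is dead; this reachability-style execution is
# the basis for validating properties such as absence of deadlock.
while True:
    live = [t for t in transitions if enabled(t)]
    if not live:
        break
    fire(live[0])

print(places)   # {'ready_to_send': 0, 'channel': 0, 'ready_to_recv': 0, 'received': 1}
```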
Abstract:
A detailed characterization of interference power statistics in CDMA systems is of considerable practical and theoretical interest. Such a characterization for uplink inter-cell interference has been difficult because of transmit power control, randomness in the number of interfering mobile stations, and randomness in their locations. We develop a new method to model the uplink inter-cell interference power as a lognormal distribution, and show that it is an order of magnitude more accurate than the conventional Gaussian approximation even when the average number of mobile stations per cell is relatively large, and that it outperforms even the moment-matched lognormal approximation considered in the literature. The proposed method determines the lognormal parameters by matching its moment generating function with a new approximation of the moment generating function of the inter-cell interference. The method is tractable and exploits the elegant theory of spatial Poisson processes. Using several numerical examples, we verify the accuracy of the proposed method in modeling the probability distribution of inter-cell interference for both small and large values of interference.
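The paper's closed-form derivation is not reproduced here; the sketch below only illustrates the general idea numerically: choose lognormal parameters (mu, sigma) so that the lognormal MGF/Laplace transform matches that of the interference at two test points, with Gauss-Hermite quadrature used on the lognormal side. The test points and target values are placeholder assumptions.

```python
# Numerical sketch of MGF matching (not the paper's closed-form result): fit
# (mu, sigma) so that E[exp(-s I)] of the lognormal matches target values of
# the inter-cell interference MGF at two test points.
import numpy as np
from scipy.optimize import fsolve

# Gauss-Hermite nodes/weights for expectations over Z ~ N(0, 1).
gh_x, gh_w = np.polynomial.hermite.hermgauss(40)

def lognormal_laplace(s, mu, sigma):
    """E[exp(-s * exp(mu + sigma * Z))] via Gauss-Hermite quadrature."""
    z = np.sqrt(2.0) * gh_x
    return np.sum(gh_w * np.exp(-s * np.exp(mu + sigma * z))) / np.sqrt(np.pi)

def fit_lognormal(s_points, target_values):
    """Solve the two matching equations for (mu, sigma)."""
    def equations(params):
        mu, sigma = params
        return [lognormal_laplace(s, mu, sigma) - t
                for s, t in zip(s_points, target_values)]
    return fsolve(equations, x0=[0.0, 1.0])

# The targets would come from the interference MGF approximation derived in
# the paper; the numbers here are placeholders for illustration only.
mu, sigma = fit_lognormal(s_points=[0.5, 2.0], target_values=[0.62, 0.35])
```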
Abstract:
There are a number of large networks that arise in problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together the important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (such as power, water, messages, or goods), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure, called the total residue approach, has been embedded into the first one. It changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms that solve these optimization problems in a distributed fashion. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case: the regional distributed algorithm and the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The aim was to define an algorithm that is fast and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
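In standard notation (not the paper's own symbols), the Lagrange-multiplier formulation behind the two-stage iteration described above can be sketched as:

```latex
% Generic Lagrange-multiplier formulation: minimise a cost f over the node
% inputs/across-variables x subject to the network flow equalities g(x) = 0.
\[
  \min_{x} \; f(x) \quad \text{s.t.} \quad g(x) = 0,
  \qquad
  L(x, \lambda) = f(x) + \lambda^{\mathsf T} g(x).
\]
% The necessary conditions \nabla_x L = 0 and g(x) = 0 are solved iteratively:
% stage one updates x from \nabla_x L(x, \lambda) = 0 with \lambda held fixed
% (a Newton step on the nonlinear system, using its Jacobian), and stage two
% updates the multipliers \lambda from the residual of g(x).
```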
Abstract:
Represented by approximately 85 species, Hemidactylus is one of the most diverse and widely distributed genera of reptiles in the world. In the Indian subcontinent, the genus is represented by 28 species, of which at least 13 are endemic to the region. Here, we report the phylogeny of Indian Hemidactylus geckos based on mitochondrial and nuclear DNA markers sequenced from multiple individuals of widely distributed as well as endemic congeners of India. The results indicate that a majority of the species distributed in India form a distinct clade whose members are largely confined to the Indian subcontinent, thus representing a unique Indian radiation. The remaining Hemidactylus geckos of India belong to two other geographical clades representing the Southeast Asian and West-Asian arid-zone species. Additionally, the three widely distributed, commensal species (H. brookii, H. frenatus and H. flaviviridis) are nested within the Indian radiation, suggesting their Indian origin. Dispersal-vicariance analysis also supports their Indian origin and subsequent dispersal out of India into the West-Asian arid zone and Southeast Asia. Thus, the Indian subcontinent has served as an important arena for diversification among Hemidactylus geckos and in the evolution and spread of its commensal species. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
In a storage system where individual storage nodes are prone to failure, the redundant storage of data in a distributed manner across multiple nodes is a must to ensure reliability. Reed-Solomon codes possess the reconstruction property under which the stored data can be recovered by connecting to any k of the n nodes in the network across which data is dispersed. This property can be shown to lead to vastly improved network reliability over simple replication schemes. Also of interest in such storage systems is the minimization of the repair bandwidth, i.e., the amount of data needed to be downloaded from the network in order to repair a single failed node. Reed-Solomon codes perform poorly here as they require the entire data to be downloaded. Regenerating codes are a new class of codes which minimize the repair bandwidth while retaining the reconstruction property. This paper provides an overview of regenerating codes including a discussion on the explicit construction of optimum codes.
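A small worked comparison, with illustrative parameters rather than ones taken from the paper, shows why repair bandwidth matters: a conventional (n, k) Reed-Solomon repair downloads the whole file, while a minimum-bandwidth regenerating (MBR) code downloads only a fraction of it.

```python
# Worked comparison with illustrative parameters: repairing one failed node
# with an (n, k) Reed-Solomon code requires downloading the whole file B,
# whereas an MBR regenerating code with d helpers needs only
# gamma = 2*d*B / (k*(2*d - k + 1)).
B, n, k, d = 1.0, 10, 5, 9          # file size (normalised) and code parameters (assumed)

rs_repair = B                        # k fragments of size B/k -> the entire file
mbr_repair = 2 * d * B / (k * (2 * d - k + 1))

print(f"Reed-Solomon repair download: {rs_repair:.2f} x file size")
print(f"MBR regenerating-code repair: {mbr_repair:.2f} x file size")
# -> roughly 0.26 x the file size for these parameters, versus 1.0 x for RS.
```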
Abstract:
A new language concept for high-level distributed programming is proposed. Programs are organised as a collection of concurrently executing processes. Some of these processes, referred to as liaison processes, have a monitor-like structure and contain ports which may be invoked by other processes for the purposes of synchronisation and communication. Synchronisation is achieved by conditional activation of ports and also through port control constructs which may directly specify the execution ordering of ports. These constructs implement a path-expression-like mechanism for synchronisation and are also equipped with options to provide conditional, non-deterministic and priority ordering of ports. The usefulness and expressive power of the proposed concepts are illustrated through solutions of several representative programming problems. Some implementation issues are also considered.
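The proposed language is not implemented here; the following Python sketch is only a loose analogy for the liaison-process idea: a monitor-like object exposes "ports" (guarded entries) that other processes invoke, with condition variables standing in for conditional port activation. Names and guards are illustrative.

```python
# Loose analogy only: a monitor-like "liaison process" whose ports are guarded
# entries invoked by other processes for synchronisation and communication.
import threading

class BoundedBufferLiaison:
    """Monitor-like liaison process with two ports: deposit and fetch."""

    def __init__(self, capacity=4):
        self.items, self.capacity = [], capacity
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def deposit(self, item):            # port: activated only when buffer not full
        with self.not_full:
            while len(self.items) >= self.capacity:
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()

    def fetch(self):                    # port: activated only when buffer not empty
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()
            return item

# Producer and consumer processes synchronise purely through the liaison's ports.
buf = BoundedBufferLiaison()
producer = threading.Thread(target=lambda: [buf.deposit(i) for i in range(8)])
consumer = threading.Thread(target=lambda: [buf.fetch() for _ in range(8)])
producer.start(); consumer.start(); producer.join(); consumer.join()
```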
Abstract:
In this paper, we propose a novel and efficient algorithm for modelling sub-65 nm clock interconnect networks in the presence of process variation. We develop a method for the delay analysis of interconnects considering the impact of Gaussian metal process variations. The resistance and capacitance of a distributed RC line are expressed as correlated Gaussian random variables, which are then used to compute the standard deviation of the delay probability distribution function (PDF) at all nodes in the interconnect network. The main objective is to obtain the delay PDF at lower cost. Convergence of this approach is in probability distribution, but not in the mean of the delay. We validate our approach against SPICE-based Monte Carlo simulations; the proposed method entails significantly lower computational cost.
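The paper's analytical technique is not reproduced here; the sketch below illustrates the Monte Carlo baseline such methods are validated against: sample correlated Gaussian per-segment R and C values and accumulate an Elmore-style delay for a distributed RC line. All numbers are assumptions.

```python
# Monte Carlo baseline sketch (not the paper's analytical method): correlated
# Gaussian per-segment R and C feed an Elmore-style delay of a distributed RC
# line; the empirical std of the delay is what the analytical PDF predicts.
import numpy as np

rng = np.random.default_rng(0)

SEGMENTS = 10
R_MEAN, C_MEAN = 10.0, 2e-14          # ohms and farads per segment (assumed)
R_SIGMA, C_SIGMA = 0.1 * R_MEAN, 0.1 * C_MEAN
RHO = 0.5                              # R-C correlation from shared metal variation (assumed)

cov = np.array([[R_SIGMA**2,              RHO * R_SIGMA * C_SIGMA],
                [RHO * R_SIGMA * C_SIGMA, C_SIGMA**2             ]])

def sample_delay(n_samples=100_000):
    """Elmore delay at the far-end node: sum_i R_i * (downstream capacitance)."""
    delays = np.zeros(n_samples)
    samples = rng.multivariate_normal([R_MEAN, C_MEAN], cov,
                                      size=(n_samples, SEGMENTS))
    r, c = samples[..., 0], samples[..., 1]            # shape (n_samples, SEGMENTS)
    for i in range(SEGMENTS):
        delays += r[:, i] * c[:, i:].sum(axis=1)       # capacitance downstream of segment i
    return delays

d = sample_delay()
print(f"mean delay   = {d.mean():.3e} s")
print(f"std of delay = {d.std():.3e} s")
```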