42 results for Distributed system
at Indian Institute of Science - Bangalore - India
Abstract:
The author presents adaptive control techniques for controlling the flow of real-time jobs from the peripheral processors (PPs) to the central processor (CP) of a distributed system with a star topology. He considers two classes of flow control mechanisms: (1) proportional control, where a certain proportion of the load offered to each PP is sent to the CP, and (2) threshold control, where there is a maximum rate at which each PP can send jobs to the CP. The problem is to obtain good algorithms for dynamically adjusting the control level at each PP in order to prevent overload of the CP, when the load offered by the PPs is unknown and varying. The author formulates the problem approximately as a standard system control problem in which the system has unknown parameters that are subject to change. Using well-known techniques (e.g., naive-feedback-controller and stochastic approximation techniques), he derives adaptive controls for the system control problem. He demonstrates the efficacy of these controls in the original problem by using the control algorithms in simulations of a queuing model of the CP and the load controls.
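As a rough illustration of the kind of adaptive rule involved, the sketch below adjusts a PP's threshold (its maximum forwarding rate) with a stochastic-approximation style update driven by the observed CP utilisation. The target utilisation, step-size schedule and update form are assumptions for illustration, not the author's exact algorithm.

```python
# Illustrative stochastic-approximation update of a PP's threshold control level.
# target_util, the step-size schedule and the update form are assumptions.

def update_threshold(threshold, observed_cp_util, target_util, step):
    """Raise the admission threshold when the CP is under-loaded,
    lower it when the CP is over-loaded."""
    error = target_util - observed_cp_util
    return max(0.0, threshold + step * error)

threshold = 5.0  # maximum rate (jobs/s) this PP may send to the CP
for n, util in enumerate([0.95, 0.90, 0.70, 0.80], start=1):
    threshold = update_threshold(threshold, util, target_util=0.80, step=1.0 / n)
    print(f"iteration {n}: threshold = {threshold:.3f} jobs/s")
```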
Abstract:
An onboard spacecraft computing system is a case of a functionally distributed system that requires continuous interaction among the nodes to control the operations at different nodes. A simple and reliable protocol is desired for such an application. This paper discusses a formal approach to specifying the computing system with respect to some important issues encountered in the design and development of a protocol for the onboard distributed system. The issues considered in this paper are concurrency, exclusiveness and sequencing relationships among the various processes at different nodes. A 6-tuple model is developed for the precise specification of the system. The model also enables us to check the consistency of the specification and to detect deadlocks caused by improper specification. An example is given to illustrate the use of the proposed methodology for a typical spacecraft configuration. Although the theory is motivated by a specific application, it may be applied to other distributed computing systems such as those encountered in process-control industries, power-plant control and similar environments.
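The abstract does not spell out the components of the 6-tuple, so the sketch below only illustrates how concurrency, exclusiveness and sequencing relations among processes might be captured as explicit sets that can be checked for consistency. All field and process names are hypothetical.

```python
# Hypothetical relational specification of processes at nodes; the actual
# 6-tuple components of the paper are not given in the abstract.
from dataclasses import dataclass, field

@dataclass
class SystemSpec:
    processes: set                      # e.g. {"telemetry", "attitude"}
    nodes: set                          # e.g. {"OBC", "AOCS"}
    location: dict                      # process -> node
    concurrent: set = field(default_factory=set)   # unordered pairs allowed to overlap
    exclusive: set = field(default_factory=set)    # unordered pairs that must not overlap
    precedes: set = field(default_factory=set)     # ordered pairs (a, b): a before b

    def consistent(self) -> bool:
        """A pair cannot be declared both concurrent and mutually exclusive."""
        return not (self.concurrent & self.exclusive)

spec = SystemSpec(
    processes={"telemetry", "attitude"},
    nodes={"OBC", "AOCS"},
    location={"telemetry": "OBC", "attitude": "AOCS"},
    concurrent={frozenset({"telemetry", "attitude"})},
    precedes={("attitude", "telemetry")},
)
print(spec.consistent())  # True
```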
Abstract:
A distributed system has many servers to attain increased availability of service and fault tolerance. Balancing the load among these servers is an important task in achieving better performance. Various hardware- and software-based load balancing solutions are available. However, there is always an overhead on the servers and the load balancer while they communicate with each other and share their availability and current load status information. The load balancer is always busy listening to clients' requests and redirecting them; it also needs to collect the servers' availability status frequently to keep itself up to date. The servers are busy not only providing service to clients but also sharing their current load information with the load balancing algorithm. In this paper we propose and discuss the concept and system model for a software-based load balancer with an Availability Checker and Load Reporters (LB-ACLRs), which reduces the overhead on the servers and the load balancer. We also describe the architectural components with their roles and responsibilities, and present a detailed analysis showing how the proposed Availability Checker significantly increases the performance of the system.
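A minimal sketch of the idea, assuming the Availability Checker keeps a cached availability/load table that Load Reporters push to, so the balancer itself never has to poll the servers. All class and method names here are illustrative, not the paper's.

```python
# Illustrative software load balancer with a separate availability/load cache.
# Names (AvailabilityChecker, report, pick_server) are assumptions.

class AvailabilityChecker:
    """Holds the latest load report pushed by each server's Load Reporter."""
    def __init__(self):
        self.table = {}  # server_id -> (available, load)

    def report(self, server_id, available, load):
        self.table[server_id] = (available, load)

    def pick_server(self):
        candidates = [(load, sid) for sid, (ok, load) in self.table.items() if ok]
        return min(candidates)[1] if candidates else None

checker = AvailabilityChecker()
checker.report("s1", available=True, load=0.7)
checker.report("s2", available=True, load=0.3)
checker.report("s3", available=False, load=0.1)
print(checker.pick_server())  # -> "s2", the least-loaded available server
```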
Abstract:
Reduction of the execution time of a job through equitable distribution of the work load among the processors in a distributed system is the goal of load balancing. The performance of static and dynamic load balancing algorithms for the extended hypercube is discussed. Threshold algorithms are well-known algorithms for dynamic load balancing in distributed systems. An extension of the threshold algorithm, called the multilevel threshold algorithm, is proposed. The hierarchical interconnection network of the extended hypercube is suitable for implementing the proposed algorithm. The new algorithm has been implemented on a transputer-based system, and the performance of the algorithm for an extended hypercube is compared with those for mesh and binary hypercube networks.
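The sketch below illustrates the flavour of a threshold decision rule, here with two levels as a stand-in for the multilevel scheme. The specific threshold values and the transfer policy are assumptions, not the algorithm proposed in the paper.

```python
# Illustrative two-level threshold rule for dynamic load balancing.
# LOW/HIGH thresholds and the transfer policy are assumptions.

LOW, HIGH = 2, 6   # queue-length thresholds

def place_job(local_queue_len, neighbour_queue_lens):
    """Decide where a newly arrived job should execute."""
    if local_queue_len < LOW:
        return "local"                       # lightly loaded: keep the job
    if local_queue_len < HIGH:
        # moderately loaded: transfer only if some neighbour is clearly lighter
        return "remote" if min(neighbour_queue_lens) < LOW else "local"
    return "remote"                          # heavily loaded: always try to transfer

print(place_job(1, [4, 5]))   # local
print(place_job(4, [1, 5]))   # remote
print(place_job(9, [7, 8]))   # remote
```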
Abstract:
The steady-state throughput performance of distributed applications deployed in switched networks in the presence of end-system bottlenecks is studied in this paper. The effect of various limitations at an end-system is modelled as an equivalent transmission capacity limitation. A class of distributed applications is characterised by a static traffic distribution matrix that determines the communication between the various components of the application. It is found that uniqueness of the steady-state throughputs depends only on the traffic distribution matrix and that some applications (e.g., broadcast applications) can yield non-unique values for the steady-state component throughputs. For a given switch capacity, with traffic distributions that yield fair, unique throughputs, the trade-off between the end-system capacity and the number of application components is brought out. With a proposed distributed rate control, it is shown that a unique solution can be obtained for certain traffic distributions for which it is otherwise impossible. Also, by proper selection of the rate control parameters, various throughput performance objectives can be realised.
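As a toy illustration of how a traffic distribution matrix and an equivalent end-system capacity interact, the damped fixed-point iteration below scales each component's sending rate so that its send plus receive traffic stays within its capacity. This simplified model and all parameter values are assumptions, much cruder than the analysis in the paper.

```python
# Toy model: component i sends at rate r[i], split among the others according to
# traffic distribution matrix T (rows sum to 1). Each end-system has an equivalent
# capacity C that its send + receive traffic must respect. Illustrative only.

def steady_state_rates(T, C, demand, iterations=200):
    n = len(T)
    r = [0.0] * n
    for _ in range(iterations):
        recv = [sum(r[i] * T[i][j] for i in range(n)) for j in range(n)]
        target = [min(demand[j], max(0.0, C - recv[j])) for j in range(n)]
        r = [0.5 * r[j] + 0.5 * target[j] for j in range(n)]  # damped update
    return r

T = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
print(steady_state_rates(T, C=10.0, demand=[10.0, 10.0, 10.0]))  # fair, equal rates
```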
Abstract:
This paper primarily intends to develop a GIS (geographical information system)-based data mining approach for optimally selecting the locations and determining installed capacities for setting up distributed biomass power generation systems in the context of decentralized energy planning for rural regions. The optimal locations within a cluster of villages are obtained by matching the installed capacity needed with the demand for power, minimizing the cost of transportation of biomass from dispersed sources to the power generation system, and the cost of distribution of electricity from the power generation system to demand centers or villages. The methodology was validated by using it to develop an optimal plan for implementing distributed biomass-based power systems for meeting the rural electricity needs of Tumkur district in India, consisting of 2700 villages. The approach uses a k-medoid clustering algorithm to divide the total region into clusters of villages and locate biomass power generation systems at the medoids. The optimal value of k is determined iteratively by running the algorithm over the entire search space for different values of k along with demand-supply matching constraints. The optimal value of k is chosen such that it minimizes the total cost of system installation, the cost of transportation of biomass, and transmission and distribution. A smaller region, consisting of 293 villages, was selected to study the sensitivity of the results to varying demand and supply parameters. The results of clustering are represented on a GIS map of the region.
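The sketch below is a bare-bones k-medoids (swap-based) clustering of village coordinates, showing how the cluster medoids would serve as candidate plant locations. The distance measure, sample data and swap heuristic are simplified assumptions relative to the GIS-based, cost-weighted procedure in the paper.

```python
# Minimal k-medoids sketch on 2-D village coordinates (illustrative only).
import itertools, math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def total_cost(points, medoids):
    return sum(min(dist(p, m) for m in medoids) for p in points)

def k_medoids(points, k, iterations=50, seed=0):
    random.seed(seed)
    medoids = random.sample(points, k)
    for _ in range(iterations):
        best = (total_cost(points, medoids), medoids)
        for m, p in itertools.product(list(medoids), points):
            if p in medoids:
                continue
            trial = [p if x == m else x for x in medoids]
            best = min(best, (total_cost(points, trial), trial))
        if best[1] == medoids:
            break                      # no swap improves the clustering cost
        medoids = best[1]
    return medoids

villages = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(k_medoids(villages, k=2))        # one medoid (candidate plant site) per cluster
```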
Abstract:
This paper presents a multi-class support vector machine (SVM) approach for locating and diagnosing faults in electric power distribution feeders with the penetration of Distributed Generations (DGs). The proposed approach is based on the three-phase voltage and current measurements which are available at all the sources, i.e., the substation and the connection points of the DGs. To illustrate the proposed methodology, a practical distribution feeder emanating from a 132/11 kV grid substation in India with loads and a suitable number of DGs at different locations is considered. To show the effectiveness of the proposed methodology, practical situations in distribution systems (DS), such as all types of faults with a wide range of varying fault locations, source short circuit (SSC) levels and fault impedances, are considered in the studies. The proposed fault location scheme is capable of accurately identifying the fault type, the location of the faulted feeder section and the fault impedance. The results demonstrate the feasibility of applying the proposed method in practical smart grid distribution automation (DA) for fault diagnosis.
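A minimal sketch of the classification step, assuming scikit-learn is available and that the feature vector is built from three-phase voltage and current magnitudes at the measurement points. The feature layout, class labels and training data below are placeholders, not the paper's exact scheme.

```python
# Illustrative multi-class SVM for fault-type classification (not the paper's exact setup).
# Features: assumed three-phase voltage/current magnitudes at the measurement points.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))      # 12 assumed measurement features (placeholder data)
y_train = rng.integers(0, 4, size=200)    # assumed fault classes, e.g. L-G, L-L, L-L-G, 3-phase

clf = SVC(kernel="rbf", C=10.0, gamma="scale", decision_function_shape="ovo")
clf.fit(X_train, y_train)

x_new = rng.normal(size=(1, 12))
print("predicted fault class:", clf.predict(x_new)[0])
```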
Abstract:
Concurrency control (CC) algorithms are important in distributed database systems to ensure consistency of the database. A number of such algorithms are available in the literature. The issue of performance evaluation of these algorithms has been recognized to be important. However, only a few studies have been carried out towards this. This paper deals with the performance evaluation of a CC algorithm proposed by Rosenkrantz et al. through a detailed simulation study. In doing so, the algorithm has been modified so that it can, within itself, take care of the redundancy in the database. The influences of various system parameters and the transaction profile on the response time and on the degree of conflict are considered. The entire study has been carried out using the programming language SIMULA on a DEC-1090 system.
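The abstract does not restate the algorithm itself; as background, the Rosenkrantz et al. scheme is timestamp-based, and the sketch below shows one of its well-known conflict-resolution rules (wound-wait) purely as an illustration of the kind of decision the simulated CC algorithm makes on a conflict, not as the exact algorithm evaluated in the paper.

```python
# Wound-wait conflict resolution (a timestamp-based rule associated with
# Rosenkrantz et al.), shown only as an illustration.

def on_conflict(requester_ts, holder_ts):
    """Older transaction (smaller timestamp) wounds the younger holder;
    a younger requester waits for the older holder."""
    if requester_ts < holder_ts:
        return "wound holder (holder restarts)"
    return "requester waits"

print(on_conflict(requester_ts=5, holder_ts=9))   # older requester: wound holder
print(on_conflict(requester_ts=9, holder_ts=5))   # younger requester: wait
```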
Abstract:
Input-output stability of linear-distributed parameter systems of arbitrary order and type in the presence of a distributed controller is analyzed by extending the concept of dissipativeness, with certain modifications, to such systems. The approach is applicable to systems with homogeneous or homogenizable boundary conditions. It also helps in generating a Liapunov functional to assess asymptotic stability of the system.
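For reference, the underlying notion of dissipativeness can be stated as follows; this is the standard form, which the paper extends, with modifications, to distributed-parameter systems with a distributed controller.

```latex
% A system with input u and output y is dissipative with respect to a supply
% rate w(u, y) if there exists a storage functional V \ge 0 such that, along
% all trajectories,
V\bigl(x(t_1)\bigr) \;\le\; V\bigl(x(t_0)\bigr) \;+\; \int_{t_0}^{t_1} w\bigl(u(t),\,y(t)\bigr)\,dt,
\qquad t_1 \ge t_0 .
% The storage functional then serves as a candidate Liapunov functional for
% assessing asymptotic stability.
```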
Abstract:
We address the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks. The transmission protocol follows a two-hop model wherein the source transmits a vector in the first hop and, in the second hop, the relays transmit a vector which is a transformation of the received vector by a relay-specific unitary transformation. Design criteria are derived for this system model, and codes that achieve full diversity are proposed. For a fixed number of relay nodes, the general system model considered in this paper admits code constructions with lower decoding complexity compared to codes based on some earlier system models.
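A small numerical sketch of the two-hop protocol, assuming each relay simply multiplies its (noiseless) received vector by its own unitary matrix. The matrices and vector length are arbitrary choices for illustration, not the code constructions of the paper.

```python
# Two-hop relay sketch: each relay applies its own unitary transformation to
# the vector received from the source (noise omitted; matrices are illustrative).
import numpy as np

s = np.array([1 + 1j, -1 + 0j])                           # source vector (first hop)

theta = np.pi / 4
U1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])          # rotation (unitary)
U2 = np.array([[0, 1], [1, 0]], dtype=complex)            # permutation (unitary)

t1, t2 = U1 @ s, U2 @ s                                   # relay transmissions (second hop)
print(np.allclose(U1.conj().T @ U1, np.eye(2)))           # True: U1 is unitary
print(t1, t2)
```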
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of the resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since it increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into aspects of projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware error and failure events. This in turn influences an availability-aware middleware to take proactive action (even before the application is affected, in case the system and the application have low recoverability). The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. This work, to the best of our knowledge, is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
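A toy sketch of the health-vector idea: keep an exponentially weighted count of error events per resource and flag a resource when the smoothed score crosses a threshold, at which point the middleware could act proactively. The decay factor, threshold and resource names are assumptions, not the prototype's actual model.

```python
# Toy health-vector based failure predictor (all constants are assumptions).

class HealthVector:
    def __init__(self, decay=0.9, threshold=3.0):
        self.score = {}          # resource -> exponentially weighted error score
        self.decay = decay
        self.threshold = threshold

    def tick(self, errors):
        """errors: dict of resource -> error events observed in this interval."""
        for res in set(self.score) | set(errors):
            self.score[res] = self.decay * self.score.get(res, 0.0) + errors.get(res, 0)

    def at_risk(self):
        return [res for res, s in self.score.items() if s >= self.threshold]

hv = HealthVector()
for interval in [{"dimm0": 1}, {"dimm0": 2}, {"dimm0": 2, "disk1": 1}]:
    hv.tick(interval)
print(hv.at_risk())   # ['dimm0'] -> trigger proactive migration/recovery
```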
Abstract:
Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. Also of interest is minimization of the bandwidth required to repair the system following a node failure. In a recent paper, Wu et al. characterize the tradeoff between the repair bandwidth and the amount of data stored per node. They also prove the existence of regenerating codes that achieve this tradeoff. In this paper, we introduce Exact Regenerating Codes, which are regenerating codes possessing the additional property of being able to duplicate the data stored at a failed node. Such codes require low processing and communication overheads, making the system practical and easy to maintain. An explicit construction of exact regenerating codes is provided for the minimum bandwidth point on the storage-repair bandwidth tradeoff, relevant to distributed-mail-server applications. A subspace-based approach is provided and shown to yield necessary and sufficient conditions on a linear code to possess the exact regeneration property, as well as to prove the uniqueness of our construction. Also included in the paper is an explicit construction of regenerating codes for the minimum storage point, for parameters relevant to storage in peer-to-peer systems. This construction supports a variable number of nodes and can handle multiple simultaneous node failures. All constructions given in the paper are of low complexity, requiring a low field size in particular.
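For context, the storage-repair bandwidth tradeoff of Wu et al. referred to above takes the following cut-set form for an (n, k, d) regenerating code storing a file of size B, with α symbols stored per node and β symbols downloaded from each of d helper nodes during repair.

```latex
% Cut-set bound on the file size (Wu et al.):
B \;\le\; \sum_{i=0}^{k-1} \min\bigl(\alpha,\,(d-i)\beta\bigr).
% Minimum bandwidth regenerating (MBR) point, the operating point relevant to the
% exact construction mentioned above:
\alpha_{\mathrm{MBR}} = \frac{2Bd}{k(2d-k+1)}, \qquad
\beta_{\mathrm{MBR}} = \frac{2B}{k(2d-k+1)}.
```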
Abstract:
An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases, while the first phase has been considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of a DCS and for the specification of the protocol requirements. These two formal specification techniques form the basis of the development of a simple and straightforward procedure for the design of the protocol. The applicability of the above design procedure is illustrated by considering the example of a computing system encountered on board a spacecraft. A Petri-net based approach has been adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
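As a small illustration of the Petri-net modelling step, the sketch below encodes places, transitions and the usual firing rule for a toy request/acknowledge exchange between two nodes. The net itself is an assumption for illustration, not the protocol designed in the paper.

```python
# Minimal Petri-net sketch (places, transitions, firing rule) for a toy
# request/acknowledge exchange; the net is illustrative, not the paper's protocol.

marking = {"A_idle": 1, "req_sent": 0, "B_ready": 1, "ack_sent": 0}

transitions = {
    "send_req": ({"A_idle": 1}, {"req_sent": 1}),
    "send_ack": ({"req_sent": 1, "B_ready": 1}, {"ack_sent": 1, "B_ready": 1}),
    "recv_ack": ({"ack_sent": 1}, {"A_idle": 1}),
}

def enabled(t):
    pre, _ = transitions[t]
    return all(marking[p] >= n for p, n in pre.items())

def fire(t):
    pre, post = transitions[t]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] = marking.get(p, 0) + n

for t in ["send_req", "send_ack", "recv_ack"]:
    assert enabled(t), f"deadlock before {t}"
    fire(t)
print(marking)   # back to the initial marking: one full request/ack cycle
```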
Abstract:
There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that puts together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (such as power, water, a message, goods, etc.), an output, or none. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two also.

A second solution procedure, called the total residue approach, has also been embedded into the first one. It changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the property of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
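In outline, the two-stage iteration described above works on the first-order conditions of the equality-constrained problem. A standard statement, in notation that is ours rather than the paper's, is:

```latex
% Network optimisation with equality constraints g(x) = 0 (flow balance, etc.):
\min_{x} f(x) \quad \text{s.t.} \quad g(x) = 0,
\qquad
L(x,\lambda) = f(x) + \lambda^{\top} g(x).
% Necessary conditions, solved iteratively:
\nabla_x L(x,\lambda) = 0, \qquad g(x) = 0 .
% Stage 1 solves the first condition for x at fixed \lambda (a Newton-type step
% using the Jacobian of \nabla_x L); stage 2 updates the multipliers \lambda;
% the same Jacobian reappears in stage 2, as noted in the abstract.
```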