267 results for distributed shared memory
Abstract:
In this paper we address the problem of distributed transmission of functions of correlated sources over a fast fading multiple access channel (MAC). This is a basic building block in a hierarchical sensor network used in estimating a random field, where the cluster head is interested only in estimating a function of the observations. The observations are transmitted to the cluster head through a fast fading MAC. We provide sufficient conditions for lossy transmission when the encoders and decoders are provided with partial information about the channel state. Furthermore, side information about the signal may be available at the encoders and the decoder. Various previous results are recovered as special cases. Efficient joint source-channel coding schemes are discussed for the transmission of discrete and continuous alphabet sources to recover function values.
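As a toy illustration of the setting (not the paper's scheme; every parameter below is invented for the example), the following Python snippet simulates uncoded transmission of two correlated Gaussian observations over a fast-fading Gaussian MAC, where a decoder knowing the fading state forms a per-symbol linear MMSE estimate of their sum:

```python
import numpy as np

# Toy sketch, NOT the paper's coding scheme: two sensors send correlated
# Gaussian observations uncoded over a fast Rayleigh-fading Gaussian MAC;
# the decoder (full receiver CSI assumed) estimates the sum of the sources.
rng = np.random.default_rng(0)
n, rho, P, nvar = 100_000, 0.8, 1.0, 0.25   # symbols, correlation, power, noise var
s = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)  # observations
f = s.sum(axis=1)                            # function of interest: the sum

h = rng.rayleigh(scale=np.sqrt(0.5), size=(n, 2))   # fast fading gains
y = np.sqrt(P) * (h * s).sum(axis=1) + rng.normal(scale=np.sqrt(nvar), size=n)

# Per-symbol linear MMSE estimate of f from y, given the fading state h:
num = np.sqrt(P) * (1 + rho) * (h[:, 0] + h[:, 1])                        # E[f y | h]
den = P * (h[:, 0]**2 + h[:, 1]**2 + 2 * rho * h[:, 0] * h[:, 1]) + nvar  # E[y^2 | h]
f_hat = y * num / den
print(f"MSE {np.mean((f - f_hat)**2):.3f} vs Var(f) {np.var(f):.3f}")
```

The superposition property of the MAC performs part of the computation for free, which is the intuition behind joint source-channel schemes for recovering function values.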
Abstract:
Template matching is concerned with measuring the similarity between patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database, and some binary image recognition problems inherently need large template sets; the recognition of Chinese characters, for example, needs thousands of templates. The proposed algorithm runs on the Connection Machine, the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of the templates and of the input image pattern. For a given binary image, it scans the template pyramid searching for a match. A binary image of N × N pixels can be matched by our algorithm in O(log N) time, independent of the number of templates. Implementation of the proposed scheme is described in detail.
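A serial Python sketch of the pyramid idea may help (our rendering, not the paper's Connection Machine implementation; the coarse-to-fine pruning rule below is an illustrative heuristic):

```python
import numpy as np

def build_pyramid(img):
    """OR-reduce a binary N x N image (N a power of 2) down to 1 x 1."""
    levels = [np.asarray(img, dtype=bool)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        n = a.shape[0] // 2
        levels.append(a.reshape(n, 2, n, 2).any(axis=(1, 3)))  # 2x2 OR blocks
    return levels[::-1]                      # coarsest level first

def match(image, templates):
    """Coarse-to-fine search: at each pyramid level, keep only the templates
    nearest (in XOR/Hamming distance) to the input at that resolution."""
    img_pyr = build_pyramid(image)
    tmpl_pyrs = [build_pyramid(t) for t in templates]
    candidates = list(range(len(templates)))
    for level in range(len(img_pyr)):
        dists = [(np.count_nonzero(img_pyr[level] ^ tmpl_pyrs[i][level]), i)
                 for i in candidates]
        best = min(d for d, _ in dists)
        candidates = [i for d, i in dists if d == best]   # prune at this scale
    return candidates                         # indices of best-matching template(s)

# Example: four random 8x8 binary templates, query equal to template 2.
rng = np.random.default_rng(1)
templates = [rng.integers(0, 2, (8, 8), dtype=bool) for _ in range(4)]
print(match(templates[2], templates))         # -> [2]
```

The O(log N) bound quoted in the abstract comes from performing each level's comparisons in parallel across the machine; this serial version only shows the data flow, with template pyramids built once and reused across queries.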
Abstract:
An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases; the first phase has been considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of a DCS and of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for protocol design. The applicability of the design procedure is illustrated with an example of a computing system found on board a spacecraft. A Petri-net-based approach has been adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
Abstract:
We have performed a series of magnetic aging experiments on single crystals of Dy0.5Sr0.5MnO3. The results demonstrate striking memory and chaos-like effects in this insulating half-doped perovskite manganite and suggest the existence of strong magnetic relaxation mechanisms of a clustered magnetic state. The spin-glass-like state established below a temperature T_sg ≈ 34 K originates from quenched disorder arising from the ionic-radii mismatch at the rare-earth site. However, deviations from the typical behavior seen in canonical spin glass materials are observed, indicating that the glassy magnetic properties are due to cooperative and frustrated dynamics in a heterogeneous or clustered magnetic state. In particular, the microscopic spin-flip time obtained from dynamical scaling near the spin glass freezing temperature is four orders of magnitude larger than the microscopic times found in atomic spin glasses. The magnetic viscosity deduced from the time dependence of the zero-field-cooled magnetization exhibits a peak at a temperature T < T_sg and displays a marked dependence on waiting time in zero field.
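For context, "dynamical scaling" here conventionally refers to critical slowing down near the freezing temperature; a hedged LaTeX rendering of the standard fitted form (the paper's exact expression may differ) is:

```latex
% Conventional critical slowing-down form used in spin-glass dynamical scaling:
\tau = \tau_0 \left( \frac{T_f}{T_{\mathrm{sg}}} - 1 \right)^{-z\nu}
```

where τ is the observation time, T_f the frequency-dependent freezing temperature, τ_0 the microscopic spin-flip time, and zν the dynamical critical exponent. Atomic spin glasses typically yield τ_0 of order 10^-12 to 10^-13 s, so a τ_0 four orders of magnitude larger points to clusters of spins relaxing collectively rather than individual atomic spins.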
Abstract:
A detailed characterization of interference power statistics in CDMA systems is of considerable practical and theoretical interest. Such a characterization for uplink inter-cell interference has been difficult because of transmit power control, randomness in the number of interfering mobile stations, and randomness in their locations. We develop a new method to model the uplink inter-cell interference power as a lognormal distribution, and show that it is an order of magnitude more accurate than the conventional Gaussian approximation even when the average number of mobile stations per cell is relatively large; it even outperforms the moment-matched lognormal approximation considered in the literature. The proposed method determines the lognormal parameters by matching its moment generating function with a new approximation of the moment generating function of the inter-cell interference. The method is tractable and exploits the elegant theory of spatial Poisson processes. Using several numerical examples, the accuracy of the proposed method in modeling the probability distribution of inter-cell interference is verified for both small and large values of interference.
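The randomness the paper grapples with, and the baseline it improves upon, can be reproduced with a short Monte Carlo sketch (illustrative parameters; this implements the classical moment-matched lognormal fit, not the paper's MGF-matching method):

```python
import numpy as np

# Hedged sketch: interferers form a Poisson number of points in a cell,
# aggregate interference follows power-law path loss. All values illustrative.
rng = np.random.default_rng(0)
lam, radius, alpha, trials = 5.0, 10.0, 4.0, 20_000  # mean users, cell radius, path-loss exp.
samples = np.empty(trials)
for t in range(trials):
    k = rng.poisson(lam)                        # random number of interferers
    r = radius * np.sqrt(rng.random(k)) + 1.0   # uniform-in-disc distance, 1 m exclusion
    samples[t] = np.sum(r ** (-alpha))          # aggregate interference power

# Classical moment matching: pick (mu, sigma) so the lognormal's first two
# moments equal the empirical ones.
m1, m2 = samples.mean(), np.mean(samples ** 2)
sigma2 = np.log(m2 / m1**2)
mu = np.log(m1) - sigma2 / 2
print(f"moment-matched lognormal: mu={mu:.3f}, sigma={np.sqrt(sigma2):.3f}")
```

The heavy tail produced by near-cell-edge interferers is what defeats the Gaussian approximation; the paper's contribution is a better way of choosing (mu, sigma) than the moment match shown here.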
Abstract:
Large networks occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model bringing together the important constraints, based on an abstraction of a general network. The third part deals with solution procedures: it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and for spatially distributed systems are described.

A number of common features pertain to such networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (such as power, water, messages, or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required results are the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function of the inputs to be minimized, a loss function, or an efficiency function. The resulting mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables, and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has also been embedded into the first one. This is called the total-residue approach: it modifies the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These are called the regional distributed algorithm and the hierarchical regional distributed algorithm (both exploiting the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The aim was to define an algorithm that is fast and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
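A hedged LaTeX rendering of the kind of two-stage Lagrange-multiplier iteration described (our notation and update rule, not the paper's equations):

```latex
% Assumed problem form:  minimize f(x)  subject to  g(x) = 0,
% with constraint Jacobian  J(x) = \partial g / \partial x.
% Necessary (first-order) conditions:
\nabla f(x) + J(x)^{\mathsf{T}} \lambda = 0, \qquad g(x) = 0
% Stage 1: with \lambda^{(k)} fixed, Newton-solve the first block for x^{(k+1)}.
% Stage 2: re-estimate the multipliers reusing the same Jacobian, e.g. the
% least-squares multiplier update
\lambda^{(k+1)} = -\bigl(J J^{\mathsf{T}}\bigr)^{-1} J \, \nabla f \Big|_{x = x^{(k+1)}}
```

The reuse of J across both stages is consistent with the abstract's remark that the Jacobian of stage one occurs in stage two as well, and it is also what makes the decomposition across regions (each holding a block of J) attractive on a distributed computer system.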
Abstract:
In a storage system where individual storage nodes are prone to failure, redundant storage of data in a distributed manner across multiple nodes is a must to ensure reliability. Reed-Solomon codes possess the reconstruction property, under which the stored data can be recovered by connecting to any k of the n nodes across which the data is dispersed. This property can be shown to lead to vastly improved network reliability over simple replication schemes. Also of interest in such storage systems is the minimization of the repair bandwidth, i.e., the amount of data that must be downloaded from the network in order to repair a single failed node. Reed-Solomon codes perform poorly here, as they require the entire data to be downloaded. Regenerating codes are a new class of codes that minimize the repair bandwidth while retaining the reconstruction property. This paper provides an overview of regenerating codes, including a discussion of the explicit construction of optimal codes.
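For reference, the tradeoff such an overview revolves around is the cut-set bound of the regenerating-codes literature; in the standard notation (file size B, any k of n nodes suffice for reconstruction, d helper nodes each contribute β symbols to a repair, α symbols stored per node):

```latex
% Cut-set bound on the file size of an (n, k, d) regenerating code:
B \le \sum_{i=0}^{k-1} \min\{\alpha, (d-i)\beta\}
% Two extreme points of the resulting tradeoff curve:
% minimum-storage regeneration (MSR)
(\alpha, d\beta)_{\mathrm{MSR}} = \Bigl(\tfrac{B}{k},\ \tfrac{Bd}{k(d-k+1)}\Bigr)
% and minimum-bandwidth regeneration (MBR)
(\alpha, d\beta)_{\mathrm{MBR}} = \Bigl(\tfrac{2Bd}{2kd-k^2+k},\ \tfrac{2Bd}{2kd-k^2+k}\Bigr)
```

Reed-Solomon codes sit far from this curve, since repairing one node costs the whole file B; regenerating codes achieve points on it.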
Abstract:
A new language concept for high-level distributed programming is proposed. Programs are organised as a collection of concurrently executing processes. Some of these processes, referred to as liaison processes, have a monitor-like structure and contain ports which may be invoked by other processes for the purposes of synchronisation and communication. Synchronisation is achieved by conditional activation of ports and also through port control constructs which may directly specify the execution ordering of ports. These constructs implement a path-expression-like mechanism for synchronisation and are also equipped with options to provide conditional, non-deterministic and priority ordering of ports. The usefulness and expressive power of the proposed concepts are illustrated through solutions of several representative programming problems. Some implementation issues are also considered.
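As a loose illustration in a mainstream language (our hypothetical rendering with Python threads; the paper proposes language constructs, not a library API), a liaison process with two conditionally activated ports behaves like a monitor:

```python
import threading

# Hypothetical sketch of a "liaison process" with ports 'put' and 'get'.
# Port activation is conditional: 'get' is enabled only when the buffer is
# non-empty and 'put' only when it is not full, mimicking the monitor-like
# structure and conditional activation described in the abstract.
class BufferLiaison:
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity
        lock = threading.Lock()
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)

    def put(self, item):                  # port: enabled when buffer not full
        with self._not_full:
            while len(self._items) >= self._capacity:
                self._not_full.wait()
            self._items.append(item)
            self._not_empty.notify()

    def get(self):                        # port: enabled when buffer not empty
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()
            return item

liaison = BufferLiaison(capacity=2)
threading.Thread(target=lambda: [liaison.put(i) for i in range(4)]).start()
print([liaison.get() for _ in range(4)])   # -> [0, 1, 2, 3]
```

What the sketch cannot show is the proposal's distinctive part: port control constructs that directly specify execution ordering of ports (path-expression style), with conditional, non-deterministic, and priority ordering options, which here would have to be encoded by hand in the guard conditions.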
Abstract:
The effect of thermal cycling on the load-controlled tension-tension fatigue behavior of a Ni-Ti-Fe shape memory alloy (SMA) at room temperature was studied. Considerable strain accumulation was observed in this alloy under both quasi-static and cyclic loading conditions. Though steady state is reached within the first 50-100 cycles in all cases, the accumulated steady-state strain, ε_p,ss, is much smaller in the thermally cycled alloy. As a result, its fatigue performance was found to be significantly enhanced vis-à-vis the as-solutionized alloy. Furthermore, under load-controlled conditions, the fatigue life of Ni-Ti-Fe alloys was found to be exclusively dependent on ε_p,ss. Observations made by profilometry and differential scanning calorimetry (DSC) indicate that the 200-500% enhancement in the fatigue life of the thermally cycled alloy is due to the homogeneous distribution of the accumulated fatigue strain.
Abstract:
Distributed computing systems can be modeled adequately by Petri nets. The computation of invariants of Petri nets becomes necessary for proving properties of the modeled systems. This paper presents a two-phase, bottom-up approach for invariant computation and analysis of Petri nets. In the first phase, a newly defined subnet with an invariant, called the RP-subnet, is chosen. In the second phase, the selected RP-subnet is analyzed. Our methodology is illustrated with two examples: the dining philosophers' problem and the connection-disconnection phase of a transport protocol. We believe that this new method, which is computationally no worse than existing techniques, would simplify the analysis of many practical distributed systems.
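A minimal sketch of the underlying notion (generic place-invariant computation, not the paper's RP-subnet method): a place invariant is an integer vector x with x^T C = 0, where C is the net's incidence matrix, so invariants come from the null space of C^T:

```python
from sympy import Matrix

# Place invariants of a Petri net: vectors x with x^T C = 0, where C is the
# incidence matrix (rows = places, columns = transitions).
# Example net: three places p0..p2 in a cycle driven by transitions t0..t2.
C = Matrix([
    [-1,  0,  1],    # t0 consumes from p0, t2 produces into p0
    [ 1, -1,  0],    # t0 produces into p1, t1 consumes from p1
    [ 0,  1, -1],    # t1 produces into p2, t2 consumes from p2
])
for v in C.T.nullspace():                       # solutions of C^T x = 0
    v = v / min(abs(e) for e in v if e != 0)    # normalize to small integers
    print(list(v))                              # -> [1, 1, 1]
```

The invariant [1, 1, 1] says the total token count over the three places is conserved by every firing, the kind of property (mutual exclusion, conservation) that invariant analysis is used to prove for distributed systems.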
Abstract:
An associative memory with parallel architecture is presented. The neurons are modelled by perceptrons having only binary, rather than continuous-valued, input. To store m elements each having n features, m neurons each with n connections are needed. The n features are coded as an n-bit binary vector. The weights of the n connections that store the n features of an element take only two values, -1 and 1, corresponding to the absence or presence of a feature. This makes the learning very simple and straightforward. For an input corrupted by binary noise, the associative memory indicates the element that is closest (in terms of Hamming distance) to the noisy input. In the case where the noisy input is equidistant from two or more stored vectors, the associative memory indicates two or more elements simultaneously. From some simple experiments performed on the human memory and also on the associative memory, it can be concluded that the associative memory presented in this paper is in some respects more akin to human memory than a Hopfield model.
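The scheme is simple enough to state directly in code; the following Python sketch (names are ours) stores patterns as ±1 weight vectors and recalls by maximum activation, which is equivalent to minimum Hamming distance:

```python
import numpy as np

def store(patterns):
    """One-shot learning: weight +1 where a feature is present, -1 where absent."""
    return 2 * np.asarray(patterns, dtype=int) - 1      # m x n matrix of +/-1

def recall(weights, probe):
    """Each neuron's activation is its weight vector dotted with the +/-1-coded
    probe; maximum activation = minimum Hamming distance. Ties return several
    elements simultaneously, as in the paper."""
    scores = weights @ (2 * np.asarray(probe, dtype=int) - 1)
    return np.flatnonzero(scores == scores.max())

memory = store([[1, 0, 1, 1, 0],
                [0, 1, 1, 0, 1],
                [1, 1, 0, 0, 0]])
print(recall(memory, [1, 0, 1, 0, 0]))   # one bit flipped from pattern 0 -> [0]
```

For an n-bit probe at Hamming distance d from a stored pattern, the activation is n - 2d, so the argmax neuron is exactly the Hamming-nearest element, and equidistant patterns surface together.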
Abstract:
NiTi thin films deposited by DC magnetron sputtering of an alloy (Ni/Ti 45/55) target at different deposition rates and substrate temperatures were analyzed for their structure and mechanical properties. The crystalline structure, phase transformation, and mechanical response were characterized by X-ray diffraction (XRD), differential scanning calorimetry (DSC), and nano-indentation, respectively. The films were deposited on silicon substrates maintained at temperatures in the range 300 to 500 °C and post-annealed at 600 °C for four hours to ensure film crystallinity. Films deposited at 300 °C and annealed at 600 °C exhibited crystalline behavior with austenite as the prominent phase. Deposition onto substrates held at higher temperatures (400 and 500 °C) resulted in the coexistence of the austenite and martensite phases. Increasing the deposition rate, by raising the cathode current from 250 to 350 mA, also resulted in the appearance of the martensite phase as well as improved crystallinity. XRD analysis revealed that the crystalline film structure is strongly influenced by process parameters such as substrate temperature and deposition rate. DSC results indicate that the film deposited at 300 °C had its crystallization temperature at 445 °C in the first thermal cycle, which is further confirmed by the stress-temperature response. In the second thermal cycle, the austenite and martensite transitions were observed at 75 and 60 °C, respectively. However, the films deposited at 500 °C had the austenite and martensite transitions at 73 and 58 °C, respectively. Elastic modulus and hardness values increased from 93 to 145 GPa and from 7.2 to 12.6 GPa, respectively, with increasing deposition rate. These results are explained on the basis of changes in film composition and crystallization.
Abstract:
In a typical sensor network scenario a goal is to monitor a spatio-temporal process through a number of inexpensive sensing nodes, the key parameter being the fidelity at which the process has to be estimated at distant locations. We study such a scenario in which multiple encoders transmit their correlated data at finite rates to a distant, common decoder over a discrete time multiple access channel under various side information assumptions. In particular, we derive an achievable rate region for this communication problem.
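For orientation, the classical benchmark for this problem is the set of sufficient conditions of Cover, El Gamal, and Salehi for lossless transmission of correlated sources (S1, S2) over a MAC; the paper's achievable region generalizes conditions of this flavor to lossy reconstruction under side information assumptions (the conditions below are the classical ones, not the paper's region):

```latex
% Cover--El Gamal--Salehi sufficient conditions (lossless, no side information),
% for some input distributions p(x_1 | s_1) and p(x_2 | s_2):
H(S_1 \mid S_2) < I(X_1; Y \mid X_2, S_2)
H(S_2 \mid S_1) < I(X_2; Y \mid X_1, S_1)
H(S_1, S_2)     < I(X_1, X_2; Y)
```

Each condition matches a residual source uncertainty against the channel rate available once the other encoder's correlated knowledge is accounted for; side information at the encoders or the decoder enlarges the right-hand sides and relaxes the left-hand sides accordingly.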