945 results for: Distributed algorithm
Abstract:
Guo and Xia proposed low-complexity decoders called Partial Interference Cancellation (PIC) and PIC with Successive Interference Cancellation (PIC-SIC), which include the Zero-Forcing (ZF) and ZF-SIC receivers as special cases, together with sufficient conditions for a Space-Time Block Code (STBC) to achieve full diversity with PIC/PIC-SIC decoding on point-to-point MIMO channels. In Part I of this two-part series of papers, we give new conditions for an STBC to achieve full diversity with PIC and PIC-SIC decoders that are equivalent to Guo and Xia's conditions but much easier to check. We then show that PIC and PIC-SIC decoders are capable of achieving the full cooperative diversity available in wireless relay networks, and we give sufficient conditions for a Distributed Space-Time Block Code (DSTBC) to achieve full diversity with PIC and PIC-SIC decoders. In Part II, we construct new low-complexity full-diversity PIC/PIC-SIC decodable STBCs and DSTBCs that achieve higher rates than the known full-diversity, low-complexity ML-decodable STBCs and DSTBCs.
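To make the decoder structure concrete, below is a minimal sketch of grouped PIC-SIC detection for a generic linear model y = Hx + n: each symbol group is detected after projecting out the interference of the not-yet-decoded groups, and its contribution is then cancelled from the received vector. The group partition, constellation and equivalent channel H are illustrative assumptions, not the codes of the paper.

```python
import itertools
import numpy as np

def pic_sic_detect(y, H, groups, constellation):
    """Grouped PIC-SIC detection for y = H x + n.

    Each group of symbol indices is detected by projecting y onto the
    orthogonal complement of the columns of the not-yet-decoded groups
    (PIC), doing a small exhaustive search within the group, and then
    subtracting the detected contribution before moving on (SIC).
    """
    y = y.astype(complex)
    x_hat = np.zeros(H.shape[1], dtype=complex)
    for g, group in enumerate(groups):
        rest = [i for grp in groups[g + 1:] for i in grp]
        Hg, Hr = H[:, group], H[:, rest]
        if Hr.size:
            # Projector onto the orthogonal complement of span(Hr).
            P = np.eye(H.shape[0]) - Hr @ np.linalg.pinv(Hr)
        else:
            P = np.eye(H.shape[0])
        z, Hg_p = P @ y, P @ Hg
        # Exhaustive ML search within the (small) group.
        best = min(itertools.product(constellation, repeat=len(group)),
                   key=lambda s: np.linalg.norm(z - Hg_p @ np.array(s)))
        x_hat[group] = best
        y = y - Hg @ np.array(best)        # successive cancellation
    return x_hat

# Example: 4 transmit symbols from 4-QAM, detected in two groups of two.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
qam = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
x = np.array([qam[i] for i in rng.integers(0, 4, size=4)])
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(pic_sic_detect(y, H, [[0, 1], [2, 3]], qam))
```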
Abstract:
In this second part of a two-part series of papers, we construct a new class of Space-Time Block Codes (STBCs) for the point-to-point MIMO channel and Distributed STBCs (DSTBCs) for the amplify-and-forward relay channel that achieve full diversity with Partial Interference Cancellation (PIC) and PIC with Successive Interference Cancellation (PIC-SIC) decoders. The proposed class of STBCs includes most of the known full-diversity, low-complexity PIC/PIC-SIC decodable STBCs as special cases. We also show that a number of known full-diversity PIC/PIC-SIC decodable STBCs that were constructed for the point-to-point MIMO channel can be used as full-diversity PIC/PIC-SIC decodable DSTBCs in relay networks. For the same decoding complexity, the proposed STBCs and DSTBCs achieve higher rates than the known low-decoding-complexity codes. Simulation results show that the new codes have better bit error rate performance than the low-ML-decoding-complexity codes available in the literature.
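As a concrete point of reference for the class of codes being generalised, the sketch below builds the classical Alamouti codeword, a well-known full-diversity STBC whose orthogonal columns allow decoding with groups of size one; it is shown only as a familiar special case, not as one of the new constructions.

```python
import numpy as np

def alamouti(x1, x2):
    """2x2 Alamouti STBC codeword: rows are time slots, columns antennas.

    Its columns are orthogonal for any (x1, x2), which is what enables
    full diversity with symbol-by-symbol (group size one) decoding.
    """
    return np.array([[x1,            x2],
                     [-np.conj(x2),  np.conj(x1)]])

X = alamouti(1 + 1j, -1 + 1j)
# Column orthogonality: X^H X is a scaled identity matrix.
print(np.round(X.conj().T @ X, 10))
```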
Optimised form of acceleration correction algorithm within SPH-based simulations of impact mechanics
Abstract:
In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilising characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the original acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail.
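The selection rule lends itself to a simple grid search. The sketch below shows only that bookkeeping: `corrected_acceleration(h)` is a hypothetical callback that would run one SPH momentum-equation evaluation with the correction term built at support size h, and the objective is interpreted here as the summed absolute deviation of the windowed acceleration about its mean, which is an assumption about the exact functional used.

```python
import numpy as np

def variation_about_mean(a):
    """Summed absolute deviation of the windowed acceleration samples
    about their mean - one reading of 'total variation of the
    oscillation with respect to its (local) mean'; an assumption."""
    a = np.asarray(a)
    return np.sum(np.abs(a - a.mean()))

def best_support_size(corrected_acceleration, candidate_h):
    """Grid search over candidate kernel support sizes.

    `corrected_acceleration(h)` is a hypothetical callback returning
    acceleration samples in a local window after one SPH evaluation
    with the correction term at support size h; the SPH solver itself
    is not sketched here."""
    scores = [variation_about_mean(corrected_acceleration(h))
              for h in candidate_h]
    return candidate_h[int(np.argmin(scores))]
```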
Abstract:
This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight from projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Due to the incomplete projection data, the reconstructed image contains truncation artifacts which extend into the region of interest (ROI), making the reconstructed image unsuitable for further use. Data completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion, then apply the fan-beam FBP algorithm with no backprojection weight for 2-D image reconstruction, and evaluate the quality of the resulting reconstructions.
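A minimal sketch of the completion step: fit autoregressive coefficients to the measured portion of one projection by least squares, extrapolate past the truncation edge, and taper the extrapolated samples to zero with a smooth window. The model order, window shape and extrapolation length are illustrative assumptions.

```python
import numpy as np

def wlp_complete(proj, order=10, n_extrap=64):
    """Extrapolate a truncated 1-D projection by windowed linear prediction.

    Fits AR(order) coefficients to the measured samples by least squares,
    predicts n_extrap samples past the truncation edge, and applies a
    cosine taper so the completed data decays smoothly to zero.
    """
    # Least-squares AR fit: proj[n] ~ sum_k a[k] * proj[n-1-k]
    rows = [proj[n - order:n][::-1] for n in range(order, len(proj))]
    A, b = np.array(rows), proj[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    ext = list(proj)
    for _ in range(n_extrap):
        ext.append(np.dot(a, ext[-order:][::-1]))
    ext = np.array(ext)
    taper = 0.5 * (1 + np.cos(np.linspace(0, np.pi, n_extrap)))
    ext[len(proj):] *= taper          # windowing: smooth decay to zero
    return ext

# Example: complete a projection profile truncated on the right.
t = np.linspace(0, 1, 128)
completed = wlp_complete(np.exp(-((t - 0.7) / 0.2) ** 2))
```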
Abstract:
Flexible Manufacturing Systems (FMS), widely considered the manufacturing technology of the future, are gaining increasing importance due to the immense advantages they provide in terms of cost, quality and productivity over conventional manufacturing. An FMS is a complex interconnection of capital-intensive resources, and a high level of system performance is crucial for survival in a competitive environment. Discrete event simulation is one of the most popular methods for performance evaluation of FMS during the planning, design and operation phases. Fast simulators are especially useful for selecting optimal strategies for flow control (which part type to enter, and at what instant), AGV scheduling (which vehicle carries which part), routing (which machine processes the part) and part selection (which part to process next). In this paper we develop a C-net based model for an FMS and use it for distributed discrete event simulation. We illustrate with examples the efficacy of distributed discrete event simulation for the performance evaluation of FMSs.
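To indicate the flavour of the evaluation machinery, here is a toy discrete-event simulation of parts flowing through two machines in series, driven by a priority queue of timed events; the topology, timings and statistics are invented for illustration and carry none of the C-net structure or distributed execution discussed above.

```python
import heapq
import random

def simulate_fms(n_parts=50, mean_proc={'M1': 4.0, 'M2': 6.0}, seed=1):
    """Toy discrete-event simulation: parts flow through M1 then M2."""
    rng = random.Random(seed)
    route = ['M1', 'M2']
    queue_at = {m: [] for m in route}    # parts waiting at each machine
    busy = {m: False for m in route}
    events, eid, done = [], 0, []

    def schedule(t, kind, payload):
        nonlocal eid
        heapq.heappush(events, (t, eid, kind, payload))
        eid += 1

    def start_if_idle(m, now):
        if not busy[m] and queue_at[m]:
            part = queue_at[m].pop(0)
            busy[m] = True
            schedule(now + rng.expovariate(1 / mean_proc[m]), 'finish', (m, part))

    for i in range(n_parts):             # deterministic arrival feed
        schedule(i * 2.0, 'arrive', (route[0], i))
    while events:
        now, _, kind, (m, part) = heapq.heappop(events)
        if kind == 'arrive':
            queue_at[m].append(part)
        else:                            # 'finish'
            busy[m] = False
            nxt = route.index(m) + 1
            if nxt < len(route):
                queue_at[route[nxt]].append(part)
                start_if_idle(route[nxt], now)
            else:
                done.append(now)
        start_if_idle(m, now)
    return max(done), n_parts / max(done)   # makespan, throughput

print(simulate_fms())
```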
Abstract:
Simple algorithms have been developed to generate pairs of minterms forming a given 2-sum and thereby to test 2-asummability of switching functions. The 2-asummability testing procedure can be easily implemented on a computer. Since 2-asummability is a necessary and sufficient condition for a switching function of up to eight variables to be linearly separable (LS), it can be used for testing linear separability of switching functions of up to eight variables.
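A brute-force rendering of the test, practical for small n and assuming the usual definition in which a minterm may be paired with itself: a function is 2-summable when some pair of true vertices and some pair of false vertices share the same componentwise integer sum, and 2-asummable otherwise.

```python
from itertools import combinations_with_replacement, product

def is_2_asummable(f, n):
    """Test 2-asummability of a switching function f on n variables.

    f maps each 0/1 tuple of length n to 0 or 1. The function is
    2-summable if a pair of true vertices and a pair of false vertices
    have the same componentwise integer sum; 2-asummable otherwise.
    """
    points = list(product((0, 1), repeat=n))
    true_pts = [p for p in points if f(p)]
    false_pts = [p for p in points if not f(p)]

    def pair_sums(pts):
        return {tuple(a[i] + b[i] for i in range(n))
                for a, b in combinations_with_replacement(pts, 2)}

    return not (pair_sums(true_pts) & pair_sums(false_pts))

# Majority on 3 variables is linearly separable, hence 2-asummable;
# XOR on 2 variables is not.
maj3 = lambda p: int(sum(p) >= 2)
xor2 = lambda p: p[0] ^ p[1]
print(is_2_asummable(maj3, 3), is_2_asummable(xor2, 2))   # True False
```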
Abstract:
Spectral efficiency is a key characteristic of cellular communications systems, as it quantifies how well the scarce spectrum resource is utilized. It is influenced by the scheduling algorithm as well as the signal and interference statistics, which, in turn, depend on the propagation characteristics. In this paper we derive analytical expressions for the short-term and long-term channel-averaged spectral efficiencies of the round robin, greedy Max-SINR, and proportional fair schedulers, which are popular and cover a wide range of system performance and fairness trade-offs. A unified spectral efficiency analysis is developed to highlight the differences among these schedulers. The analysis is different from previous work in the literature in the following aspects: (i) it does not assume the co-channel interferers to be identically distributed, as is typical in realistic cellular layouts, (ii) it avoids the loose spectral efficiency bounds used in the literature, which only considered the worst case and best case locations of identical co-channel interferers, (iii) it explicitly includes the effect of multi-tier interferers in the cellular layout and uses a more accurate model for handling the total co-channel interference, and (iv) it captures the impact of using small modulation constellation sizes, which are typical of cellular standards. The analytical results are verified using extensive Monte Carlo simulations.
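For intuition about how the three schedulers differ, the following Monte Carlo sketch compares their long-term average spectral efficiencies over i.i.d. Rayleigh fading; note that it deliberately uses identically distributed users, unquantized rates and no explicit interferers, which are exactly the simplifications the analysis above goes beyond.

```python
import numpy as np

def scheduler_rates(n_users=8, n_slots=20000, mean_snr_db=10.0, seed=0):
    """Average spectral efficiency (bits/s/Hz) of round robin, greedy
    max-SINR and proportional fair scheduling under Rayleigh fading."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (mean_snr_db / 10) * rng.exponential(size=(n_slots, n_users))
    rate = np.log2(1 + snr)                  # per-user achievable rate
    rr = rate[np.arange(n_slots), np.arange(n_slots) % n_users].mean()
    greedy = rate.max(axis=1).mean()
    # Proportional fair: serve the user maximising rate / average
    # throughput, with the average tracked by an exponential moving average.
    avg, pf_sum, beta = np.full(n_users, 1e-3), 0.0, 0.01
    for t in range(n_slots):
        k = int(np.argmax(rate[t] / avg))
        pf_sum += rate[t, k]
        served = np.zeros(n_users)
        served[k] = rate[t, k]
        avg = (1 - beta) * avg + beta * served
    return {'round robin': rr, 'max-SINR': greedy, 'prop fair': pf_sum / n_slots}

print(scheduler_rates())
```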
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit it to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is in part a review and in part presents new results: the portion dealing with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of sources. While in the finite-field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
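A small GF(2) illustration of the common-linear-map idea, in the spirit of the Körner-Marton scheme (the sparsity model and the choice of code are assumptions made for the example): two sources hold 7-bit words x and y whose modulo-2 sum has Hamming weight at most 1; each transmits only the 3-bit syndrome of its word under the Hamming(7,4) parity-check matrix, and the receiver recovers x XOR y from the XOR of the two syndromes.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i (1-based) is
# the binary expansion of i, so a weight-1 word is located directly by
# its syndrome.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in range(2, -1, -1)])

def syndrome(v):
    return tuple(H @ v % 2)

def recover_sum(sx, sy):
    """Receiver side: XOR the two 3-bit syndromes and decode z = x ^ y,
    valid whenever z has weight <= 1 (the toy correlation model)."""
    s = tuple((a + b) % 2 for a, b in zip(sx, sy))
    z = np.zeros(7, dtype=int)
    if any(s):
        pos = int(''.join(map(str, s)), 2) - 1   # column index matching s
        z[pos] = 1
    return z

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy()
y[4] ^= 1                            # sources differ in one position
z = recover_sum(syndrome(x), syndrome(y))
assert (z == (x ^ y)).all()
print(z)                             # each source sent 3 bits, not 7
```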
Abstract:
We consider the problem of computing a minimum cycle basis in a directed graph G. The input to this problem is a directed graph whose arcs have positive weights. In this problem a {−1, 0, 1} incidence vector is associated with each cycle, and the vector space over Q generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. The current fastest algorithm for computing a minimum cycle basis in a directed graph with m arcs and n vertices runs in O(m^(ω+1) n) time, where ω < 2.376 is the exponent of matrix multiplication. If one allows randomization, then an Õ(m³n) algorithm is known for this problem. In this paper we present a simple Õ(m²n) randomized algorithm for this problem. The problem of computing a minimum cycle basis in an undirected graph has been well studied. In this problem a {0, 1} incidence vector is associated with each cycle and the vector space over F₂ generated by these vectors is the cycle space of the graph. The fastest known algorithm for computing a minimum cycle basis in an undirected graph runs in O(m²n + mn² log n) time, and our randomized algorithm for directed graphs almost matches this running time.
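A toy illustration of these definitions (the graph and cycles are invented for the example): build the {−1, 0, 1} incidence vectors of three directed cycles and confirm over Q that they are linearly dependent, since the cycle space of a connected graph has dimension m − n + 1.

```python
import numpy as np

# Directed graph on 4 vertices; weighted arcs.
arcs = [('a', 'b', 1), ('b', 'c', 1), ('c', 'a', 2), ('c', 'd', 1), ('d', 'a', 3)]
index = {(u, v): i for i, (u, v, _) in enumerate(arcs)}

def incidence(cycle):
    """{-1,0,1} incidence vector of a cycle given as a vertex sequence:
    +1 if an arc is traversed forwards, -1 if traversed backwards."""
    vec = np.zeros(len(arcs))
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        if (u, v) in index:
            vec[index[(u, v)]] = 1
        elif (v, u) in index:
            vec[index[(v, u)]] = -1
        else:
            raise ValueError(f'no arc between {u} and {v}')
    return vec

c1 = incidence(['a', 'b', 'c'])        # a->b->c->a
c2 = incidence(['a', 'b', 'c', 'd'])   # a->b->c->d->a
c3 = incidence(['a', 'c', 'd'])        # traverses c->a backwards
B = np.vstack([c1, c2, c3])
# Cycle space dimension is m - n + 1 = 5 - 4 + 1 = 2, so any three
# cycles must be dependent over Q:
print(np.linalg.matrix_rank(B))        # 2
```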
Abstract:
Genetic Algorithms (GAs) are recognized as an alternative class of computational models which mimic natural evolution to solve problems in a wide domain including machine learning, music generation, genetic synthesis, etc. In the present study a Genetic Algorithm has been employed for damage assessment of composite structural elements. A state of damage is modeled as a reduction in stiffness, and the task is to determine the magnitude and location of the damage. In a composite plate that is discretized into a set of finite elements, if the j-th element is damaged, the GA-based technique will predict the reduction in E_x and E_y and the location j. The method exploits the fact that natural frequency decreases with decreasing stiffness. The natural frequencies of any two modes of the damaged plate, for the assumed damage parameters, are obtained efficiently through eigenvalue sensitivity analysis; the eigenvalue sensitivities are the derivatives of the eigenvalues with respect to certain design parameters. If ω_i^u is the natural frequency of the i-th mode of the undamaged plate and ω_i^d that of the damaged plate, with δω_i the difference between the two and δω_k the corresponding difference for the k-th mode, then R is defined as the ratio δω_i/δω_k. For a random selection of E_x, E_y and j, a ratio R_i is obtained. A combination of E_x, E_y and j which makes R_i − R = 0 is then found by the Genetic Algorithm.
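A minimal GA skeleton for this search, with all numbers illustrative: individuals are (E_x, E_y, j) triples and fitness rewards driving |R_i − R| towards zero. The function `frequency_ratio` is a hypothetical placeholder for the eigensensitivity-based frequency model, which is not reproduced here.

```python
import random

N_ELEMS = 20            # number of plate elements (illustrative)
R_MEASURED = 1.35       # measured frequency-shift ratio (illustrative)

def frequency_ratio(Ex, Ey, j):
    """Hypothetical stand-in for the eigensensitivity model mapping a
    candidate damage state to the ratio R_i of modal frequency shifts."""
    return 1.0 + 0.5 * (1 - Ex) + 0.3 * (1 - Ey) + 0.01 * j   # placeholder

def fitness(ind):
    Ex, Ey, j = ind
    return -abs(frequency_ratio(Ex, Ey, j) - R_MEASURED)

def ga(pop_size=50, gens=200, pm=0.1, seed=0):
    rng = random.Random(seed)
    rand_ind = lambda: (rng.uniform(0.3, 1.0), rng.uniform(0.3, 1.0),
                        rng.randrange(N_ELEMS))
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [rng.choice(p) for p in zip(a, b)]   # uniform crossover
            if rng.random() < pm:                        # mutation
                child = list(rand_ind())
            children.append(tuple(child))
        pop = elite + children
    return max(pop, key=fitness)

print(ga())   # best (Ex, Ey, j) found
```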
Abstract:
Stirred tank bioreactors, employed in the production of a variety of biologically active chemicals, are often operated in batch, fed-batch and continuous modes. The optimal design of a bioreactor depends on the kinetics of the biological process as well as the performance criteria (yield, productivity, etc.) under consideration. In this paper, a general framework is proposed for addressing the two key issues in optimal bioreactor design, namely (i) the choice of the best operating mode and (ii) the corresponding flow rate trajectories. The optimal bioreactor design problem is formulated with the initial conditions and the inlet and outlet flow rate trajectories as decision variables, and with more than one performance criterion (yield, productivity, etc.) as objective functions. A computational methodology based on a genetic algorithm approach is developed to solve this multiobjective optimization problem with multiple decision variables. The applicability of the algorithm is illustrated by solving two challenging problems from the bioreactor optimization literature.
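To show the shape of such a formulation, here is a compact sketch: candidate piecewise-constant feed-rate profiles are scored on two objectives (yield and productivity) against a toy Monod-kinetics fed-batch model, and a crude evolutionary loop retains the non-dominated set. The kinetic constants, bounds and both objective definitions are illustrative assumptions, and the simple Pareto-filter loop stands in for the paper's GA machinery.

```python
import random

def simulate(feeds, dt=0.1, steps_per_seg=20):
    """Toy fed-batch bioreactor (Monod kinetics, Euler integration).

    `feeds` is a piecewise-constant feed-rate profile, one value per
    time segment. Returns (yield, productivity); the kinetic constants
    and both performance definitions are illustrative assumptions."""
    X, S, V = 1.0, 20.0, 1.0             # biomass g/L, substrate g/L, volume L
    mu_max, Ks, Yxs, Sf = 0.4, 1.0, 0.5, 50.0
    fed = S * V                          # substrate charged initially
    for F in feeds:
        for _ in range(steps_per_seg):
            mu = mu_max * S / (Ks + S)
            X += dt * (mu * X - F / V * X)
            S += dt * (-mu * X / Yxs + F / V * (Sf - S))
            V += dt * F
            S = max(S, 0.0)
            fed += dt * F * Sf
    T = len(feeds) * steps_per_seg * dt
    return (X * V) / fed, (X * V) / T    # yield, productivity

def pareto(front):
    """Keep the non-dominated (yield, productivity) points."""
    return [p for p in front
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in front)]

rng = random.Random(0)
pop = [[rng.uniform(0, 0.3) for _ in range(8)] for _ in range(40)]
for _ in range(60):                      # crude evolutionary loop
    scored = [(simulate(f), f) for f in pop]
    keep = pareto([s for s, _ in scored])
    elite = [f for s, f in scored if s in keep]
    pop = elite + [[max(0.0, g + rng.gauss(0, 0.03)) for g in rng.choice(elite)]
                   for _ in range(40 - len(elite))]
print(sorted(simulate(f) for f in pop)[:5])   # sample of the final front
```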