5 results for Parallel Evolutionary Algorithms

in WestminsterResearch - UK


Relevance:

30.00%

Publisher:

Abstract:

The UMTS turbo encoder is composed of the parallel concatenation of two Recursive Systematic Convolutional (RSC) encoders which start and end at a known state. This trellis termination directly affects the performance of turbo codes. This paper presents a performance analysis of multi-point trellis termination of turbo codes, which terminates the RSC encoders at more than one point of the current frame while keeping the interleaver length the same. For long interleaver lengths, this approach allows a data frame to be divided into sub-frames that can be treated as independent blocks. A novel decoding architecture using multi-point trellis termination and collision-free interleavers is presented. Collision-free interleavers are used to solve the memory collision problems encountered in parallel decoding of turbo codes. The proposed parallel decoding architecture reduces the decoding delay caused by the iterative nature and forward-backward metric computations of turbo decoding algorithms. Our simulations verified that this turbo encoding and decoding scheme achieves Bit Error Rate (BER) performance very close to that of UMTS turbo coding while providing almost 50% time saving for the 2-point termination and 80% time saving for the 5-point termination.
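
The enabler described above is that each terminated sub-frame starts and ends in a known trellis state and can therefore be decoded as an independent block. The following Python sketch only illustrates that splitting-and-parallel-decoding idea; it is not the paper's decoder, and decode_subframe() is a hypothetical stand-in for a full SISO pass.

# Minimal sketch, assuming a frame of channel LLRs and a list of termination
# points; decode_subframe() is a placeholder for a real SISO/BCJR decode of
# one independently terminated sub-block.
from concurrent.futures import ProcessPoolExecutor

def decode_subframe(llrs):
    # Placeholder decode: hard decisions on the received LLRs.
    return [1 if llr < 0 else 0 for llr in llrs]

def parallel_decode(frame_llrs, termination_points):
    # Each sub-block is self-contained, so the blocks can be decoded concurrently.
    cuts = [0] + sorted(termination_points) + [len(frame_llrs)]
    subframes = [frame_llrs[a:b] for a, b in zip(cuts, cuts[1:])]
    with ProcessPoolExecutor() as pool:
        decoded = pool.map(decode_subframe, subframes)
    return [bit for block in decoded for bit in block]

if __name__ == "__main__":
    import random
    llrs = [random.uniform(-5.0, 5.0) for _ in range(1000)]
    bits = parallel_decode(llrs, termination_points=[500])  # 2-point termination example
    print(len(bits))

With a 2-point termination the frame splits into two halves, and with a 5-point termination into five, which is where the reported time savings come from.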

Relevance:

30.00%

Publisher:

Abstract:

Turbo codes experience a significant decoding delay because of the iterative nature of the decoding algorithms, the high number of metric computations and the complexity added by the (de)interleaver. The extrinsic information is exchanged sequentially between two Soft-Input Soft-Output (SISO) decoders. Instead of this sequential process, a received frame can be divided into smaller windows to be processed in parallel. In this paper, a novel parallel processing methodology is proposed based on previous parallel decoding techniques. A novel Contention-Free (CF) interleaver is proposed as part of the decoding architecture, which allows extrinsic Log-Likelihood Ratios (LLRs) to be used immediately as a-priori LLRs to start the second half of the iterative turbo decoding. The simulation case studies performed in this paper show that our parallel decoding method can provide 80% time saving compared to standard decoding and 30% time saving compared to previous parallel decoding methods, at the expense of 0.3 dB Bit Error Rate (BER) performance degradation.
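
The contention-free requirement on the interleaver is what lets the parallel windows read and write extrinsic LLR memory without collisions. The sketch below is not the paper's interleaver design; it only checks the commonly used contention-free property for a given permutation, assuming a frame of length K split into equal windows of width W = K // M.

# Minimal sketch: at each local step i, the M parallel SISO units must hit
# distinct memory banks, i.e. pi[i + j*W] // W must differ across j.
def is_contention_free(pi, num_windows):
    K = len(pi)
    assert K % num_windows == 0
    W = K // num_windows
    for i in range(W):
        banks = {pi[i + j * W] // W for j in range(num_windows)}
        if len(banks) != num_windows:
            return False
    return True

if __name__ == "__main__":
    # Example: a quadratic permutation polynomial (QPP) interleaver,
    # pi(x) = (f1*x + f2*x^2) mod K, is commonly reported to be
    # contention-free for window sizes that divide K.
    K, f1, f2 = 40, 3, 10
    pi = [(f1 * x + f2 * x * x) % K for x in range(K)]
    print(is_contention_free(pi, num_windows=4))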

Relevance:

30.00%

Publisher:

Abstract:

Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years, and the Dynamic Network Optimization (DNO) concept emerged years ago with the aim of continuously optimizing the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by a significant bottleneck: the lengthy optimization runtime. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, yielding a significant, highly scalable and nearly linear speedup of up to 6.9 and 14.5 on distributed 8-core and 16-core systems respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterparts. This is a milestone in realizing DNO. Furthermore, the techniques may be applied to other applications based on similar greedy optimization algorithms.
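
The reported speedups correspond to parallel efficiencies of roughly 86% on 8 cores (6.9/8) and 91% on 16 cores (14.5/16). The usual way to extract parallelism from a greedy algorithm, sketched below under stated assumptions, is to evaluate all candidate moves concurrently and apply only the best one sequentially; evaluate_gain() and the toy objective are hypothetical placeholders, not the paper's MNO code.

# Minimal sketch of parallel candidate evaluation inside a greedy loop.
from concurrent.futures import ProcessPoolExecutor

def evaluate_gain(args):
    state, move = args
    # Placeholder objective: prefer moves close to the current state value.
    return move, -abs(state - move)

def greedy_optimize(state, candidates, steps=5, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(steps):
            # Gain evaluations are independent, so they run in parallel.
            results = pool.map(evaluate_gain, [(state, m) for m in candidates])
            best_move, best_gain = max(results, key=lambda r: r[1])
            state = best_move  # the greedy choice itself stays sequential
    return state

if __name__ == "__main__":
    print(greedy_optimize(state=0.0, candidates=[1.5, 2.0, 7.3, -4.2]))

Because only the evaluation phase is parallelized while the greedy decision order is preserved, the parallel result can match the sequential one, consistent with the self-consistency reported above.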

Relevance:

30.00%

Publisher:

Abstract:

It has been years since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still in its infancy, largely due to the lack of a breakthrough in minimizing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. However, to cater for the increased optimization complexity brought by the uptake of smartphones and tablets, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects whose results demonstrate an improved speedup of 7.5 times on a 16-core distributed system, compared to 6.1 for our previous solution. Moreover, there is no degradation in the optimization outcome. This is a solid step towards the realization of DNO.
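
The gain over the earlier synchronous design comes from removing global barriers and reducing how often processes exchange data. The following Python sketch only illustrates that asynchronous pattern under assumed names (worker(), coordinate(), the toy workload); it is not the paper's implementation.

# Minimal sketch: workers optimize their own partitions and push updates to a
# shared queue only occasionally; the coordinator merges whatever has arrived
# without ever blocking on a global synchronization point.
import multiprocessing as mp
import queue

def worker(worker_id, updates):
    best = 0.0
    for step in range(100):
        best += worker_id * 0.01          # stand-in for local optimization work
        if step % 25 == 0:                # communicate rarely, not every step
            updates.put((worker_id, best))
    updates.put((worker_id, best))        # final result for this partition

def coordinate(num_workers=4):
    updates = mp.Queue()
    procs = [mp.Process(target=worker, args=(i, updates)) for i in range(num_workers)]
    for p in procs:
        p.start()
    merged = {}
    while any(p.is_alive() for p in procs) or not updates.empty():
        try:
            wid, value = updates.get(timeout=0.1)  # merge asynchronously as updates arrive
            merged[wid] = value                    # keep only the latest value per worker
        except queue.Empty:
            pass                                   # nothing new yet; keep going
    for p in procs:
        p.join()
    return merged

if __name__ == "__main__":
    print(coordinate())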