434 results for LDPC decoding


Relevance: 20.00%

Abstract:

We design powerful low-density parity-check (LDPC) codes with iterative decoding for the block-fading channel. We first study the case of maximum-likelihood decoding, and show that the design criterion is rather straightforward. Since optimal constructions for maximum-likelihood decoding do not perform well under iterative decoding, we introduce a new family of full-diversity LDPC codes that exhibit near-outage-limit performance under iterative decoding for all block lengths. This family competes favorably with multiplexed parallel turbo codes for nonergodic channels.
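
For reference, and not taken from the paper itself, the standard model behind "block-fading channel" and "near-outage-limit performance" is the following: over B fading blocks with gains α_b, the received samples and the outage probability at target rate R are commonly written as

    y_{b,t} = \alpha_b x_{b,t} + n_{b,t}, \qquad b = 1, \dots, B, \quad n_{b,t} \sim \mathcal{N}(0, \sigma^2),

    P_{\mathrm{out}}(R) = \Pr\left[ \frac{1}{B} \sum_{b=1}^{B} \tfrac{1}{2} \log_2\!\left(1 + \alpha_b^2\,\mathrm{SNR}\right) < R \right].

A full-diversity code is one whose word-error probability decays with SNR at the same order as P_out(R); "near-outage-limit performance" means operating close to this bound.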

Relevance: 20.00%

Abstract:

This paper derives approximations allowing the estimation of outage probability for standard irregular LDPC codes and full-diversity Root-LDPC codes used over nonergodic block-fading channels. Two separate approaches are discussed: a numerical approximation, obtained by curve fitting, for both code ensembles, and an analytical approximation for Root-LDPC codes, obtained under the assumption that the slope of the iterative threshold curve of a given code ensemble matches the slope of the outage capacity curve in the high-SNR regime.
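
The abstract does not spell out its fitting procedure; purely as a baseline illustration, the outage probability being approximated can be estimated by plain Monte Carlo over Rayleigh block-fading gains. The function name, rate, and parameters below are illustrative assumptions, not the paper's setup.

    import numpy as np

    def outage_probability(snr_db, rate=0.5, blocks=2, trials=200_000, seed=0):
        """Monte Carlo estimate of P_out for a Rayleigh block-fading channel.

        snr_db : average SNR in dB; rate : target rate in bits per channel use.
        """
        rng = np.random.default_rng(seed)
        snr = 10.0 ** (snr_db / 10.0)
        # Rayleigh fading: the power gains alpha_b^2 are exponential with mean 1.
        power_gains = rng.exponential(1.0, size=(trials, blocks))
        mutual_info = (0.5 * np.log2(1.0 + power_gains * snr)).mean(axis=1)
        return float(np.mean(mutual_info < rate))

    if __name__ == "__main__":
        for snr_db in (5, 10, 15, 20):
            print(snr_db, "dB ->", outage_probability(snr_db))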

Relevance: 20.00%

Abstract:

This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, previously proposed and shown to attain full transmission diversity. We study the iterative threshold performance of these codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus the fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold of Root-LDPC codes is proportional to α1α2, where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.

Relevance: 20.00%

Abstract:

Neuroimaging studies typically compare experimental conditions using average brain responses, thereby overlooking the stimulus-related information conveyed by distributed spatio-temporal patterns of single-trial responses. Here, we take advantage of this rich information at the single-trial level to decode stimulus-related signals in two event-related potential (ERP) studies. Our method models the statistical distribution of the voltage topographies with a Gaussian Mixture Model (GMM), which reduces the dataset to a number of representative voltage topographies. The degree of presence of these topographies across trials at specific latencies is then used to classify experimental conditions. We tested the algorithm using a cross-validation procedure in two independent EEG datasets. In the first ERP study, we classified left- versus right-hemifield checkerboard stimuli for the upper and lower visual hemifields. In the second ERP study, where functional differences cannot be assumed, we classified initial versus repeated presentations of visual objects. With minimal a priori information, the GMM provides neurophysiologically interpretable features, namely voltage topographies, as well as dynamic information about brain function. This method can in principle be applied to any ERP dataset to test the functional relevance of specific time periods for stimulus processing, the predictability of subjects' behavior and cognitive states, and the discrimination between healthy and clinical populations.
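
The abstract gives only the outline of the method; the sketch below is a rough, hypothetical rendering of the idea (fit a GMM to voltage topographies, use per-trial posterior "presence" in a latency window as features, then classify), written with scikit-learn. The array shapes, component count, latency window, and classifier are assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 150, 64))   # placeholder: trials x time x electrodes
    y = rng.integers(0, 2, size=200)          # placeholder condition labels

    # 1) Fit a GMM on all voltage topographies (one topography per time point).
    topographies = X.reshape(-1, X.shape[2])
    gmm = GaussianMixture(n_components=6, covariance_type="diag",
                          random_state=0).fit(topographies)

    # 2) Per trial, the "degree of presence" of each representative topography
    #    is taken here as the mean posterior probability over a latency window.
    window = slice(50, 100)                   # assumed latency window (samples)
    features = np.stack([gmm.predict_proba(trial[window]).mean(axis=0)
                         for trial in X])

    # 3) Classify experimental conditions from these features with cross-validation.
    scores = cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=5)
    print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))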

Relevance: 20.00%

Abstract:

BACKGROUND: Analyses of brain responses to external stimuli are typically based on the means computed across conditions. However, in many cognitive and clinical applications, taking their variability across trials into account has turned out to be statistically more sensitive than comparing their means. NEW METHOD: In this study we present a novel implementation of single-trial topographic analysis (STTA) for discriminating auditory evoked potentials within predefined time windows. This analysis was previously introduced for extracting spatio-temporal features at the level of the whole neural response; adapting the STTA to specific time windows is an essential step for comparing its performance to other time-window-based algorithms. RESULTS: We analyzed responses to standard vs. deviant sounds and showed that the new implementation of the STTA gives above-chance decoding results in all subjects (compared with 7 out of 11 with the original method). In comatose patients, the improvement in decoding performance was even more pronounced than in healthy controls and doubled the number of significant results. COMPARISON WITH EXISTING METHOD(S): We compared the results obtained with the new STTA to those based on logistic regression in healthy controls and in patients. Logistic regression performed better in healthy controls; however, only the new STTA provided significant results in comatose patients at the group level. CONCLUSIONS: Our results provide quantitative evidence that systematically investigating the accuracy of established methods in normal and clinical populations is an essential step for optimizing decoding performance.
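
A hedged aside on "above-chance decoding results in all subjects": per-subject accuracies are commonly tested against chance with a one-sided binomial test over the cross-validated trials. The snippet below is a generic illustration of that test, not the paper's statistical procedure.

    from scipy.stats import binomtest

    def above_chance(n_correct, n_trials, chance=0.5, alpha=0.05):
        """One-sided binomial test of decoding accuracy against the chance level."""
        result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
        return result.pvalue < alpha, result.pvalue

    print(above_chance(70, 100))   # e.g. 70 correct out of 100 trials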

Relevance: 20.00%

Abstract:

BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However no studies have explicitly investigated the time when single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach for studying the time-course of single decisions, during a task where subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that based on this new method, we can accurately decode decisions for the majority of the subjects. The typical time-period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500ms for 'easy' vs. ∼700ms for 'hard' decisions, well before subjects' response (∼340ms). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machine and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making, by accumulating patterns of topographic EEG activity at single-trial level.
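
Purely as a hypothetical sketch of the accumulation idea (and not the authors' algorithm), one can accumulate, over time, evidence that each single-trial topography resembles an "accept" rather than a "reject" prototype, and read the decision time as the first crossing of a threshold:

    import numpy as np

    def decision_time(trial, proto_a, proto_b, threshold=20.0):
        """Accumulate evidence for prototype A vs. B over time (illustrative sketch).

        trial   : array (time, electrodes) of single-trial voltage topographies
        proto_a : prototype topography for one decision, shape (electrodes,)
        proto_b : prototype topography for the other decision, shape (electrodes,)
        Returns (decoded_label, sample_index), or (None, None) if no crossing.
        """
        d_a = np.linalg.norm(trial - proto_a, axis=1)   # distance to prototype A
        d_b = np.linalg.norm(trial - proto_b, axis=1)   # distance to prototype B
        evidence = np.cumsum(d_b - d_a)                 # positive favours prototype A
        crossings = np.flatnonzero(np.abs(evidence) >= threshold)
        if crossings.size == 0:
            return None, None
        t = crossings[0]
        return ("A" if evidence[t] > 0 else "B"), int(t)

    # Toy usage with random data standing in for EEG (drifts slightly toward A).
    rng = np.random.default_rng(1)
    trial = rng.standard_normal((300, 64)) + 0.05
    print(decision_time(trial, proto_a=np.zeros(64), proto_b=np.ones(64)))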

Relevance: 20.00%

Abstract:

Directional cell growth requires that cells read and interpret shallow chemical gradients, but how the gradient directional information is identified remains elusive. We use single-cell analysis and mathematical modeling to define the cellular gradient decoding network in yeast. Our results demonstrate that the spatial information of the gradient signal is read locally within the polarity site complex using double-positive feedback between the GTPase Cdc42 and trafficking of the receptor Ste2. Spatial decoding critically depends on low Cdc42 activity, which is maintained by the MAPK Fus3 through sequestration of the Cdc42 activator Cdc24. Deregulated Cdc42 or Ste2 trafficking prevents gradient decoding and leads to mis-oriented growth. Our work discovers how a conserved set of components assembles a network integrating signal intensity and directionality to decode the spatial information contained in chemical gradients.

Relevance: 20.00%

Abstract:

In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article On Maximum-Likelihood Detection and the Search for the Closest Lattice Point, which was published by M.O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications to the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
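
For orientation, the closest-lattice-point search the thesis studies can be sketched as a basic depth-first sphere decoder with Schnorr-Euchner-style symbol ordering. This is a generic textbook version under simplifying assumptions (real-valued square channel, PAM alphabet), not the Damen-El Gamal-Caire implementation or the thesis's modifications.

    import numpy as np

    def sphere_decode(y, H, alphabet=(-3, -1, 1, 3)):
        """Depth-first search for argmin_s ||y - H s||^2 with s from a finite alphabet."""
        n = H.shape[1]
        Q, R = np.linalg.qr(H)
        z = Q.T @ y
        best = {"dist": np.inf, "s": None}

        def search(level, partial, dist):
            if dist >= best["dist"]:
                return                      # prune: outside the current sphere radius
            if level < 0:
                best["dist"], best["s"] = dist, partial.copy()
                return
            # interference from the symbols already fixed above this level
            interference = sum(R[level, j] * partial[j] for j in range(level + 1, n))
            center = (z[level] - interference) / R[level, level]
            # try candidate symbols closest to the unconstrained solution first
            for s in sorted(alphabet, key=lambda a: abs(a - center)):
                partial[level] = s
                increment = (z[level] - interference - R[level, level] * s) ** 2
                search(level - 1, partial, dist + increment)

        search(n - 1, np.zeros(n), 0.0)
        return best["s"]

    # Toy usage: 4x4 real channel, 4-PAM symbols, lightly noisy observation.
    rng = np.random.default_rng(2)
    H = rng.standard_normal((4, 4))
    s_true = rng.choice([-3, -1, 1, 3], size=4)
    y = H @ s_true + 0.1 * rng.standard_normal(4)
    print(s_true, sphere_decode(y, H))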

Relevance: 20.00%

Abstract:

Bioinformatics applies computers to problems in molecular biology. Previous research has not addressed edit metric decoders. Decoders for quaternary edit metric codes are finding use in bioinformatics problems with applications to DNA. By using side effect machines we hope to provide efficient decoding algorithms for this open problem. Two ideas for decoding algorithms are presented and examined. Both decoders use Side Effect Machines (SEMs), which are generalizations of finite state automata. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree structure of subclassification. The goal is to examine these techniques and provide new decoders for existing codes. Also presented are ideas for best practices in the creation of these two types of new edit metric decoders.
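
The abstract does not define side effect machines beyond calling them generalizations of finite state automata; the toy sketch below is only one loose, hypothetical reading of the idea, in which the "side effect" of reading a DNA word is a vector of state-visit counts that is then used for nearest-codeword classification. None of the names or details come from the thesis.

    class SideEffectMachine:
        """Toy DFA over the DNA alphabet whose side effect is counting state visits."""

        def __init__(self, n_states, transitions, start=0):
            self.n_states = n_states
            self.transitions = transitions     # dict: (state, symbol) -> next state
            self.start = start

        def features(self, word):
            state, visits = self.start, [0] * self.n_states
            for symbol in word:
                state = self.transitions[(state, symbol)]
                visits[state] += 1
            return visits

    def classify(machine, word, codewords):
        """Assign `word` to the codeword with the closest visit-count feature vector."""
        target = machine.features(word)
        return min(codewords,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(machine.features(c), target)))

    # Toy usage: a 2-state machine and two short DNA codewords.
    transitions = {(s, b): (s + 1) % 2 if b in "AT" else s
                   for s in (0, 1) for b in "ACGT"}
    sem = SideEffectMachine(2, transitions)
    print(classify(sem, "ACGTT", ["ACGTA", "GGGCC"]))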

Relevance: 20.00%

Abstract:

Modeling nonlinear systems using Volterra series is a century-old method, but practical realizations were hampered by hardware inadequate to handle the increased computational complexity stemming from its use. Interest has recently been renewed in designing and implementing filters that can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity of order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next value of the input speech signal. But this linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. So a quadratic predictor is designed and implemented in parallel with the linear predictor to yield improved mean-square-error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
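
As a hedged numerical sketch of the idea (not the paper's coder), a second-order Volterra predictor that augments a linear predictor amounts to a least-squares fit on the past samples plus their pairwise products; the order, test signal, and function names below are illustrative.

    import numpy as np

    def volterra_features(x, order):
        """Linear + quadratic (pairwise-product) features of the last `order` samples."""
        feats = []
        for n in range(order, len(x)):
            past = x[n - order:n]
            quad = np.outer(past, past)[np.triu_indices(order)]
            feats.append(np.concatenate([past, quad]))
        return np.asarray(feats)

    def fit_predictor(x, order=4):
        """Least-squares fit of a 2nd-order Volterra predictor for x[n]."""
        A = volterra_features(x, order)
        b = x[order:]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w

    # Toy usage: a signal with a mild quadratic nonlinearity.
    t = np.linspace(0, 20, 500)
    x = np.sin(t) + 0.1 * np.sin(t) ** 2
    w = fit_predictor(x)
    prediction = volterra_features(x, 4) @ w
    print("prediction MSE:", np.mean((x[4:] - prediction) ** 2))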

Relevance: 20.00%

Abstract:

The article attempts to explain the main paradox faced by Canada in formulating its foreign policy on international security. Explained in economic and political terms, this paradox consists of the contradiction between Canada's ability to achieve its strategic goals, serving its own national interest, and its dependence on the United States. The first section outlines three representative examples to evaluate this paradox: Canada's position in the North American security regime, US-Canada economic security relations, and the range of possibilities for Canada's action as a middle power. The second section suggests that a liberal agenda, especially concerning ethical issues, has been established by the country to minimize this paradox. By pursuing this agenda, Canada is able to reaffirm its national identity and therefore its independence from the United States. The third section evaluates both the explained paradox and the reaffirmation of Canadian identity during the Jean Chrétien (1993-2003), Paul Martin (2003-2006) and Stephen Harper (2006) governments.

Relevance: 20.00%

Abstract:

Parkinson's disease patients may have difficulty decoding prosodic emotion cues. Existing data suggest that the basal ganglia are involved, but the deficit may instead reflect dorsolateral prefrontal cortex dysfunction. An auditory emotional n-back task and a cognitive n-back task were administered to 33 patients and 33 older adult controls, as were an auditory emotional Stroop task and a cognitive Stroop task. No deficit was observed on the emotion decoding tasks, and this did not change with increased frontal lobe load. However, on the cognitive tasks, patients performed worse than older adult controls, suggesting that cognitive deficits may be more prominent. The impact of frontal lobe dysfunction on prosodic emotion cue decoding may only become apparent once frontal lobe pathology rises above a threshold.

Relevance: 20.00%

Abstract:

BCH codes over arbitrary finite commutative rings with identity are derived in terms of their locator vector. The derivation is based on the factorization of x^s - 1 over the unit ring of an appropriate extension of the finite ring. We present an efficient decoding procedure, based on the modified Berlekamp-Massey algorithm, for these codes. The code construction and the decoding procedures are very similar to those of BCH codes over finite integer rings. © 1999 Elsevier B.V. All rights reserved.
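
The ring-modified version is not reproduced here; for orientation only, the classical Berlekamp-Massey recurrence that it generalizes is shown below in its binary (GF(2)) LFSR-synthesis form, a standard textbook sketch unrelated to the specific rings of the paper.

    def berlekamp_massey_gf2(s):
        """Shortest LFSR (connection polynomial, length L) generating the binary sequence s."""
        n = len(s)
        c = [1] + [0] * n          # current connection polynomial C(x)
        b = [1] + [0] * n          # previous connection polynomial B(x)
        L, m = 0, 1
        for i in range(n):
            # discrepancy between s[i] and the current LFSR's prediction
            d = s[i]
            for j in range(1, L + 1):
                d ^= c[j] & s[i - j]
            if d == 0:
                m += 1
            elif 2 * L <= i:
                t = c[:]
                for j in range(n + 1 - m):
                    c[j + m] ^= b[j]
                L, b, m = i + 1 - L, t, 1
            else:
                for j in range(n + 1 - m):
                    c[j + m] ^= b[j]
                m += 1
        return c[:L + 1], L

    # Toy usage on a short binary sequence.
    print(berlekamp_massey_gf2([0, 0, 1, 1, 0, 1, 1, 1, 0, 0]))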

Relevance: 20.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Abstract:

In this paper, we present a new construction and decoding of BCH codes over certain rings. For a nonnegative integer t, let A_0 ⊂ A_1 ⊂ ··· ⊂ A_{t-1} ⊂ A_t be a chain of unitary commutative rings, where each A_i is constructed as a direct product of appropriate Galois rings, and let its projection to the fields be K_0 ⊂ K_1 ⊂ ··· ⊂ K_{t-1} ⊂ K_t (another chain of unitary commutative rings), where each K_i is the direct product of the corresponding residue fields of the given Galois rings. Also, A_i^* and K_i^* are the groups of units of A_i and K_i, respectively. This correspondence presents a technique for constructing generator polynomials of the sequence of Bose-Chaudhuri-Hocquenghem (BCH) codes with entries from A_i^* and K_i^* for each i, where 0 ≤ i ≤ t. The standard BCH construction constrains the achievable code rate and error-correction capability; the proposed contribution, however, offers the choice of a suitable BCH code with respect to both code rate and error-correction capability. In the second phase, we extend the modified Berlekamp-Massey algorithm to the above chains of unitary commutative local rings, in such a way that errors in the sequences of codewords from the sequences of BCH codes are corrected at once. This process is not much different from the original one, but it handles a sequence of codewords from the sequence of codes over the chain of Galois rings.