434 results for LDPC decoding
Abstract:
A new generalized sphere decoding algorithm is proposed for underdetermined MIMO systems with fewer receive antennas N than transmit antennas M. The proposed algorithm is significantly faster than existing generalized sphere decoding algorithms. The basic idea is to partition the transmitted signal vector into two subvectors x1 and x2 with N - 1 and M - N + 1 elements, respectively. After some simple transformations, an outer-layer Sphere Decoder (SD) can be used to choose a candidate x2, and an inner-layer SD then decides x1, so that the whole transmitted signal vector is obtained. Simulation results show that Double Layer Sphere Decoding (DLSD) has far lower complexity than existing Generalized Sphere Decoders (GSDs).
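The two-layer idea can be sketched with a brute-force stand-in for both SD layers (illustrative only: the function and variable names are made up, and a real DLSD prunes both searches with sphere radii instead of enumerating exhaustively):

```python
import itertools

import numpy as np

def dlsd_bruteforce(y, H, constellation, N, M):
    """Toy two-layer search: enumerate the (M-N+1)-element outer
    subvector x2, then search the (N-1)-element inner subvector x1."""
    H1, H2 = H[:, :N - 1], H[:, N - 1:]          # columns acting on x1 and x2
    best, best_cost = None, np.inf
    for x2 in itertools.product(constellation, repeat=M - N + 1):
        r = y - H2 @ np.array(x2)                # outer-layer residual
        for x1 in itertools.product(constellation, repeat=N - 1):
            cost = np.linalg.norm(r - H1 @ np.array(x1)) ** 2
            if cost < best_cost:
                best_cost, best = cost, np.concatenate([x1, x2])
    return best
```

In the noise-free case this recovers the transmitted vector exactly; the point of DLSD is that each enumeration above is replaced by a radius-pruned sphere search.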
Abstract:
We determine the critical noise level for decoding low-density parity check error-correcting codes based on the magnetization enumerator (M), rather than on the weight enumerator (W) employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes such as typical pairs decoding, MAP, and finite temperature decoding (MPM) becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived using the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
Abstract:
Phonological tasks are highly predictive of reading development, but their complexity obscures the underlying mechanisms driving this association. There are three key components hypothesised to drive the relationship between phonological tasks and reading: (a) the linguistic nature of the stimuli, (b) the phonological complexity of the stimuli, and (c) the production of a verbal response. We isolated the contributions of the stimulus and response components separately through the creation of latent variables to represent specially designed tasks that were matched for procedure. These tasks were administered to 570 6- to 7-year-old children along with standardised tests of regular word and non-word reading. A structural equation model, where tasks were grouped according to stimulus, revealed that the linguistic nature and the phonological complexity of the stimulus predicted unique variance in decoding, over and above matched comparison tasks without these components. An alternative model, grouped according to response mode, showed that the production of a verbal response was a unique predictor of decoding beyond matched tasks without a verbal response. In summary, we found that multiple factors contributed to reading development, supporting multivariate models over those that prioritise single factors. More broadly, we demonstrate the value of combining matched task designs with latent variable modelling to deconstruct the components of complex tasks.
Abstract:
This paper proposes the use of 2-D differential decoding to improve the robustness of dual-polarization optical packet receivers; the technique is demonstrated in a wavelength-switching scenario for the first time.
Abstract:
This study examined a Pseudoword Phonics Curriculum to determine if this form of instruction would increase students’ decoding skills compared to typical real-word phonics instruction. In typical phonics instruction, children learn to decode familiar words, which allows them to draw on their prior knowledge of how to pronounce the word and may detract from learning decoding skills. By using pseudowords during phonics instruction, students may learn more decoding skills because they are unfamiliar with the “words” and therefore cannot draw on memory for how to pronounce the word. It was hypothesized that students who learn phonics with pseudowords will learn more decoding skills and perform higher on a real-word assessment compared to students who learn phonics with real words. Students from two kindergarten classes participated in this study. An author-created word decoding assessment was used to determine the students’ ability to decode words. The study was broken into three phases, each lasting one month. During Phase 1, both groups received phonics instruction using real words, which allowed for the exploration of baseline student growth trajectories and potential teacher effects. During Phase 2, the experimental group received pseudoword phonics instruction while the control group continued real-word phonics instruction. During Phase 3, both groups were taught with real-word phonics instruction. Students were assessed on their decoding skills before and after each phase. Results from multiple regression and multi-level model analyses revealed a greater increase in decoding skills during the second and third phases of the study for students who received the pseudoword phonics instruction compared to students who received the real-word phonics instruction. This suggests that pseudoword phonics instruction improves decoding skills more quickly than real-word phonics instruction.
This also suggests that teaching decoding with pseudowords for one month can continue to improve decoding skills when children return to real-word phonics instruction. Teacher feedback suggests that confidence with reading increased for students who learned with pseudowords because they were less intimidated by the approach and viewed pseudoword phonics as a game that involved reading “silly” words. Implications of these results, limitations of this study, and areas for future research are discussed.
Abstract:
Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems.
Abstract:
Recoding encompasses mechanisms that augment the rules of standard genetic decoding. The deviations from standard decoding are often purposeful, and their realisation provides diverse and flexible regulatory mechanisms. Recoding events such as programmed ribosomal frameshifting are especially plentiful in viruses. In most organisms only a few cellular genes are known to employ programmed ribosomal frameshifting in their expression. By far the most prominent and therefore best-studied case of cellular +1 frameshifting is in the expression of antizyme mRNAs. The protein antizyme is a key regulator of polyamine levels in most eukaryotes, with some exceptions such as plants. A +1 frameshifting event is required for the full-length protein to be synthesized, and this requirement is a conserved feature of antizyme mRNAs from yeast to mammals. The efficiency of the frameshifting event depends on the free polyamine levels in the cell. cis-acting elements in antizyme mRNAs, such as specific RNA structures, are required to stimulate the frameshifting efficiency. Here I describe a novel stimulator of antizyme +1 frameshifting in the Agaricomycotina class of Basidiomycete fungi. It is a nascent peptide that acts from within the ribosome exit tunnel to stimulate frameshifting efficiency in response to polyamines. The interactions of the nascent peptide with components of the peptidyl transferase centre and the protein exit tunnel emerge in our understanding as powerful means that the cell employs for monitoring and tuning the translational process. These interactions can modulate the rate of translation, cotranslational protein folding and localization. Some nascent peptides act in concert with small molecules such as polyamines or antibiotics to stall the ribosome. To these known nascent peptide effects we have added a stimulatory effect on +1 frameshifting in antizyme mRNAs.
It is becoming evident that nascent peptide involvement in regulation of translation is a much more general phenomenon than previously anticipated.
Abstract:
Ordnance Survey, our national mapping organisation, collects vast amounts of high-resolution aerial imagery covering the entirety of the country. Currently, photogrammetrists and surveyors use this to manually capture real-world objects and characteristics for a relatively small number of features. Arguably, the vast archive of imagery that we have obtained portraying the whole of Great Britain is highly underutilised and could be ‘mined’ for much more information. Over the last year the ImageLearn project has investigated the potential of "representation learning" to automatically extract relevant features from aerial imagery. Representation learning is a form of data-mining in which the feature-extractors are learned using machine-learning techniques, rather than being manually defined. At the beginning of the project we conjectured that representations learned could help with processes such as object detection and identification, change detection and social landscape regionalisation of Britain. This seminar will give an overview of the project and highlight some of our research results.
Abstract:
Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state-of-the-art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features remain largely unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses these limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal and frequency domains, and proved to better highlight and enhance relevant neural features related to P300 and motor states than canonical EEG analyses.
Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
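As an illustration of the kind of light, interpretable front end such CNN decoders build on, here is a minimal numpy sketch of a temporal-convolution stage followed by log-variance pooling, a common band-power-like EEG feature (function name and kernel choices are made up for the example; this is not the thesis's architecture):

```python
import numpy as np

def temporal_conv_logvar(eeg, kernels):
    """Filter each EEG channel with a bank of temporal kernels, then
    pool each filtered channel with log-variance.
    eeg: array of shape (channels, samples)."""
    features = []
    for k in kernels:
        # convolve every channel with this kernel
        filtered = np.array([np.convolve(ch, k, mode="valid") for ch in eeg])
        # log-variance summarises the power passed by the kernel
        features.append(np.log(filtered.var(axis=1) + 1e-12))
    return np.concatenate(features)  # one feature per (kernel, channel)
```

A trained CNN learns the kernels instead of fixing them by hand, which is what makes inspecting the learned filters informative about the underlying neural signatures.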
Abstract:
Mutations in the SLC25A12 gene, which encodes AGC1, cause an ultra-rare genetic disease described as a developmental and epileptic encephalopathy associated with global cerebral hypomyelination. Symptoms of the disease include diffuse hypomyelination, arrested psychomotor development, severe hypotonia and seizures, and are common to other neurological and developmental disorders. Amongst the biological components believed to be most affected by AGC1 deficiency are oligodendrocytes, the glial cells responsible for myelination. Recent studies (Poeta et al., 2022) have also shown how altered levels of transcription factors and epigenetic modifications greatly affect proliferation and differentiation in oligodendrocyte precursor cells (OPCs). In this study we explore the transcriptomic landscape of Agc1 deficiency in two different model systems: OPCs silenced for Agc1 and iPSCs from human patients differentiated to neural progenitors. Analyses include differential expression, alternative splicing, and master regulator analysis. ATAC-seq results on OPCs were integrated with RNA-seq results to assess the activity of each transcription factor based on the accessibility of its putative targets, allowing RNA-seq data to be used to infer its role as either an activator or a repressor. All the findings for this model were also integrated with early RNA-seq data from the iPSC model, looking for commonalities between the two systems; among these we find a downregulation of genes encoding SREBP, a transcription factor regulating fatty acid biosynthesis, a key process for myelination, which could explain the hypomyelinated state of patients. We also find that in both systems cells tend to form more neurites, likely losing their ability to differentiate, considering their progenitor state.
We also report several alterations in the chromatin state of cells lacking Agc1, supporting the hypothesis that AGC1 deficiency is not restricted to metabolic alterations: there is a profound shift in the regulatory state of these cells.
Abstract:
In this thesis, the viability of Dynamic Mode Decomposition (DMD) as a technique to analyze and model complex dynamic real-world systems is presented. This method derives, directly from data, computationally efficient reduced-order models (ROMs) that can replace high-fidelity physics-based models which are too onerous or unavailable. Optimizations and extensions to the standard implementation of the methodology are proposed, investigating diverse case studies related to the decoding of complex flow phenomena. The flexibility of this data-driven technique allows its application to high-fidelity fluid dynamics simulations, as well as to time series of real-system observations. The resulting ROMs are tested against two tasks: (i) reduction of the storage requirements of high-fidelity simulations or observations; (ii) interpolation and extrapolation of missing data. The capabilities of DMD can also be exploited to alleviate the cost of studies that require many simulations, such as uncertainty quantification analysis, especially when dealing with complex high-dimensional systems. In this context, a novel approach to address parameter variability when modeling systems with a space- and time-variant response is proposed. Specifically, DMD is merged with another model-reduction technique, the Polynomial Chaos Expansion, for uncertainty quantification purposes. Useful guidelines for DMD deployment result from the study, together with a demonstration of its potential to ease diagnosis and scenario analysis when complex flow processes are involved.
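The core DMD step is compact enough to sketch directly from data (a minimal exact-DMD implementation assuming snapshots stacked as columns of a states-by-time matrix; the thesis's extensions, such as the coupling with Polynomial Chaos Expansion, are not shown):

```python
import numpy as np

def dmd(X, r):
    """Exact DMD: fit a linear operator A with X2 ~= A @ X1 between
    consecutive snapshots, and return its leading eigenvalues and modes.
    X: (n_states, n_snapshots) snapshot matrix; r: truncation rank."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]           # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # r x r reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = (X2 @ Vh.conj().T / s) @ W           # exact DMD modes
    return eigvals, modes
```

For snapshots generated by a linear map, the DMD eigenvalues recover the map's spectrum; for nonlinear flow data they approximate the dominant temporal dynamics, which is what makes the resulting ROMs useful for storage reduction and for interpolating or extrapolating missing data.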