632 results for Decoding


Relevance:

10.00%

Publisher:

Abstract:

Hardware designers and engineers typically need to explore a multi-parametric design space to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as the Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to map the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes better, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
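
The abstract does not reproduce the kernels themselves, so the following is a minimal sketch, in Python rather than OpenCL, of the min-sum check-node update that such an LDPC decoding kernel typically parallelizes (one work-item per check node); the toy parity-check structure and all names are illustrative, not the paper's code.

import numpy as np

def check_node_update(llrs, check_rows):
    """Min-sum rule: each check node sends each neighbour the product of
    the other neighbours' signs times their minimum magnitude."""
    messages = {}
    for c, neighbours in enumerate(check_rows):
        vals = np.array([llrs[v] for v in neighbours])
        signs, mags = np.sign(vals), np.abs(vals)
        total_sign = signs.prod()
        for i, v in enumerate(neighbours):
            # Excluding neighbour i: sign product and minimum magnitude.
            messages[(c, v)] = total_sign * signs[i] * np.delete(mags, i).min()
    return messages

# Toy parity-check structure: 3 checks over 6 variable nodes.
rows = [[0, 1, 2], [1, 3, 4], [0, 3, 5]]
llr = np.array([1.2, -0.4, 0.7, 2.1, -1.5, 0.3])
print(check_node_update(llr, rows))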

Relevance:

10.00%

Publisher:

Abstract:

Background: Qualified teaching staff are neither available nor affordable to provide large numbers of children with individual attention. One solution for providing individual tuition has been the development of tutoring programs delivered by non-professional tutors, such as classmates, older children and community volunteers. Objectives: We have conducted a systematic review of cross-age tutoring interventions delivered by non-professional tutors to children between 5 and 11 years old. Only randomized controlled trials that used reliable measures of academic outcomes, continued for at least 12 weeks, and compared tutoring to instruction as usual were included. Results: Searches of electronic databases and previous reviews, and contacts with researchers, yielded 11,564 titles; after screening, 15 studies were included in the analysis. Cross-age tutoring showed small significant effects for tutees on the composite measure of reading (g = 0.18, 95% CI: 0.08, 0.27, N = 8251), decoding skills (g = 0.29, 95% CI: 0.13, 0.44, N = 7081), and reading comprehension (g = 0.11, 95% CI: 0.01, 0.21, N = 6945). No significant effects were detected for other reading sub-skills or for mathematics. The quality of the evidence is reduced by study limitations and high heterogeneity of effects. Conclusions: The benefits for tutees of non-professional peer and cross-age tutoring can be given a positive but weak recommendation, considering the low quality of evidence and the lack of cost information. Subgroup analyses suggested that highly-structured reading programs may be more useful than loosely-structured ones. Large-scale replication trials using factorial designs, process evaluations, reliable outcome measures and logic models are needed to better understand under what conditions, and for whom, cross-age non-professional tutoring may be effective.
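
For readers unfamiliar with the g statistic above, here is a worked sketch of the arithmetic: Hedges' g is Cohen's d (the difference between the tutored and control group means divided by the pooled standard deviation) with a small-sample bias correction. The group statistics in the example are invented, chosen only so the result lands near the review's g = 0.29 for decoding skills.

import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction J
    return j * d

# Tutored group vs. instruction-as-usual group (hypothetical scores).
print(round(hedges_g(m1=52.0, m2=49.1, sd1=10.0, sd2=10.0, n1=400, n2=400), 2))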

Relevance:

10.00%

Publisher:

Abstract:

This systematic review summarizes the effects of peer tutoring delivered to children between 5 and 11 years old by non-professional tutors, such as classmates, older children and adult community volunteers. Inclusion criteria for the review were tutoring studies with a randomized controlled trial design, reliable measures of academic outcomes, and a duration of at least 12 weeks. Searches of electronic databases, previous reviews, and contacts with researchers yielded 11,564 titles. After screening, 15 studies were included in the analysis. Cross-age tutoring showed small significant effects for tutees on the composite measure of reading (g = 0.18, 95% CI: 0.08, 0.27, N = 8251), decoding skills (g = 0.29, 95% CI: 0.13, 0.44, N = 7081), and reading comprehension (g = 0.11, 95% CI: 0.01, 0.21, N = 6945). No significant effects were detected for other reading sub-skills or for mathematics. The benefits to tutees of non-professional cross-age peer tutoring can be given a positive but weak recommendation. Effect sizes were modest, in the range −0.02 to 0.29. Study limitations, the lack of cost information, the heterogeneity of effects, and the relatively small number of studies that have used a randomized controlled trial design mean that the evidence base is not as strong as it could be. Subgroup analyses of the included studies indicated that highly-structured reading programmes were of more benefit than loosely-structured ones. Large-scale replication trials using factorial designs, reliable outcome measures, process evaluations and logic models are needed to better understand under what conditions, and for whom, cross-age non-professional peer tutoring may be most effective.

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates how spatial practices of public art performance transformed public space from a congested traffic hub into an active, animated space for resistance that was equally accessible to different factions, social strata, media outlets and urban society, shaped by popular culture and social responsibility. Tahrir Square was reproduced, in a process of “space adaptation” to use Henri Lefebvre’s term, to accommodate forms of social organization and administration. Among the spatial patterns of activity detected and analyzed, this paper focuses on particular forms of mass practices of art and freedom of expression that succeeded in transforming Tahrir Square into a performative space and in commemorating its spatial events. It attempts to interrogate how the power of artistic interventions recalled socio-cultural memory through spatial forms that negotiated middle grounds between deeply segregated political and social groups in moments of utopian democracy. Through analytical surveys, decoding of media recordings of the events, and direct interviews with involved actors and witnesses, this paper offers insight into the ways protesters lent their artistic capacity to the performance of resistance, turning it into an act of spatial festivity or commemoration of events. The paper presents a series of analytical maps tracing how the role of art shifted significantly from traditional modes of freedom of expression as a narrative of resistance to more sophisticated spatial, performative ones that take on a new spatial vibrancy and purpose.

Relevance:

10.00%

Publisher:

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium length (e.g., 8,000 bits) to long length (e.g., 64,800-bit DVB-S2 codes). We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or FPGA may suit different purposes better, thus providing different acceleration factors over conventional multicore CPUs.
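
As an illustration of why each design point is expensive, the sketch below shows the shape of such a Monte Carlo exploration loop: for each candidate bitwidth, many noisy-channel trials are run and an error rate is estimated. A trivial hard-decision BPSK link stands in for the full LDPC decoder so the example stays self-contained; the function names, the quantizer and all parameter values are our own assumptions, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)

def quantize(llr, bits, clip=8.0):
    """Uniform fixed-point quantization: the kind of parameter explored."""
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(llr, -clip, clip) / clip * levels) / levels * clip

def ber_at(bits, snr_db, n_bits=100_000):
    snr = 10 ** (snr_db / 10)
    tx = rng.integers(0, 2, n_bits)                 # random payload bits
    noise = rng.normal(0.0, np.sqrt(1 / (2 * snr)), n_bits)
    llr = 4 * snr * ((1 - 2 * tx) + noise)          # BPSK channel LLRs
    rx = (quantize(llr, bits) < 0).astype(int)      # hard decision
    return np.mean(rx != tx)

for bits in (4, 6, 8):                              # candidate design points
    print(f"{bits}-bit LLRs: BER ~ {ber_at(bits, snr_db=4.0):.4f}")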

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we investigate the impact of faulty memory bit-cells on the performance of LDPC and Turbo channel decoders, based on realistic memory failure models. Our study investigates the inherent resilience of such codes to potential memory faults affecting the decoding process. We develop two mitigation mechanisms that reduce the impact of memory faults rather than correcting every single error. We show that protecting only a few bit-cells is sufficient to deal with high defect rates. In addition, we show how the use of repair iterations specifically helps mitigate the impact of faults that occur inside the decoder itself.
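
To make the bit-cell protection idea concrete, here is a small sketch of a fault-injection experiment in the spirit of the abstract: random bit flips are injected into stored fixed-point soft values, optionally sparing the most significant bit-cells. The fault model, the 8-bit word size and the function names are illustrative assumptions, not the paper's exact scheme.

import numpy as np

rng = np.random.default_rng(1)

def inject_faults(words, width, p_fault, protected_msbs=0):
    """Flip each unprotected bit of each word with probability p_fault."""
    out = words.copy()
    for b in range(width - protected_msbs):     # LSB-first; top MSBs spared
        flips = rng.random(words.shape) < p_fault
        out ^= flips.astype(words.dtype) << b
    return out

vals = rng.integers(0, 2**8, size=10, dtype=np.uint8)   # 8-bit soft values
print(inject_faults(vals, width=8, p_fault=0.05))                    # unprotected
print(inject_faults(vals, width=8, p_fault=0.05, protected_msbs=2))  # 2 MSBs safe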

Relevance:

10.00%

Publisher:

Abstract:

A multiuser dual-hop relaying system over mixed radio frequency/free-space optical (RF/FSO) links is investigated. Specifically, the system consists of m single-antenna sources, a relay node equipped with n ≥ m receive antennas and a single photo-aperture transmitter, and one destination equipped with a single photo-detector. RF links are used for simultaneous data transmission from the multiple sources to the relay. The relay operates under the decode-and-forward protocol and utilizes the popular V-BLAST technique, successively decoding each user's transmitted stream. Two common norm-based orderings are adopted, i.e., the streams are decoded in either ascending or descending order. After V-BLAST, the relay retransmits the decoded information to the destination via a point-to-point FSO link in m consecutive timeslots. Analytical expressions for the end-to-end outage probability and average symbol error probability of each user are derived, and closed-form asymptotic expressions are also presented. Capitalizing on the derived results, some engineering insights are drawn, such as the coding and diversity gain of each user, the impact of the pointing-error displacement on the FSO link, and the effectiveness of the V-BLAST ordering at the relay.
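
The norm-based ordering the abstract mentions can be sketched in a few lines: decode the users' streams successively, ordered by channel-column norm, cancelling each decoded stream before detecting the next. The zero-forcing detector, BPSK signalling and all parameter values below are illustrative assumptions rather than the paper's exact system model.

import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4                                   # users, relay receive antennas
H = (rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))) / np.sqrt(2)
x = np.sign(rng.normal(size=m)).astype(complex)   # one BPSK symbol per user
w = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
y = H @ x + w                                 # received vector at the relay

order = np.argsort(-np.linalg.norm(H, axis=0))    # descending column norms
x_hat = np.zeros(m, dtype=complex)
residual, Hw = y.copy(), H.copy()
for k in order:
    g = np.linalg.pinv(Hw)[k]                 # ZF nulling row for stream k
    x_hat[k] = np.sign((g @ residual).real)   # hard BPSK decision
    residual -= H[:, k] * x_hat[k]            # cancel the decoded stream
    Hw[:, k] = 0                              # drop its column from nulling
print("sent:", x.real, "decoded:", x_hat.real)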

Relevance:

10.00%

Publisher:

Abstract:

The increasing scale of Multiple-Input Multiple-Output (MIMO) topologies employed in forthcoming wireless communications standards presents a substantial implementation challenge to designers of embedded baseband signal processing architectures for MIMO transceivers. Specifically, the increased scale of such systems has a substantial impact on the performance/cost balance of their detection algorithms. Whilst in small-scale systems Sphere Decoding (SD) algorithms offer the best quasi-ML performance/cost balance, in larger systems heuristic detectors, such as Tabu-Search (TS) detectors, are superior. This paper addresses a dearth of research on architectures for TS-based MIMO detection, presenting the first known realisations of TS detectors for 4 × 4 and 10 × 10 MIMO systems. To the best of the authors’ knowledge, these are the largest single-chip detectors on record.
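
For readers unfamiliar with TS detection, the following is a minimal sketch of the search it performs: start from the zero-forcing estimate, repeatedly move to the best non-tabu single-symbol neighbour under the ML metric, and remember the best vector visited. BPSK symbols, the tabu-list length and the iteration budget are illustrative choices, not the paper's architecture.

import numpy as np

rng = np.random.default_rng(3)
nt = 4                                        # 4x4 MIMO, BPSK symbols
H = rng.normal(size=(nt, nt))
x = np.sign(rng.normal(size=nt))
y = H @ x + 0.3 * rng.normal(size=nt)

def cost(v):
    return float(np.sum((y - H @ v) ** 2))    # ML metric ||y - Hv||^2

cur = np.sign(np.linalg.pinv(H) @ y)          # zero-forcing starting point
best, best_cost = cur.copy(), cost(cur)
tabu, tabu_len = [], 3                        # short FIFO tabu list
for _ in range(20):                           # fixed iteration budget
    # Best non-tabu single-symbol flip (the move may be uphill).
    c, i = min((cost(np.where(np.arange(nt) == j, -cur, cur)), j)
               for j in range(nt) if j not in tabu)
    cur[i] = -cur[i]
    tabu = (tabu + [i])[-tabu_len:]
    if c < best_cost:
        best, best_cost = cur.copy(), c
print("sent:", x, "detected:", best)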

Relevance:

10.00%

Publisher:

Abstract:

One of the greatest scientific advances of the twentieth century was the development of technology that enables large-scale genome sequencing. However, the information produced by sequencing does not by itself explain a genome's primary structure, evolution and functioning. To that end, fields such as molecular biology, genetics and bioinformatics are used to study the various properties and workings of genomes. In this work we are particularly interested in understanding in detail the decoding of the genome performed by the ribosome, and in extracting general rules through the analysis of the genome's primary structure, namely codon context and codon distribution. These rules are poorly studied and understood, and it is not known whether they can be obtained through statistics and bioinformatics tools. Traditional methods for studying the distribution of codons in the genome and their context do not provide the tools needed to study these properties on a genomic scale. Count tables with codon distributions, as well as absolute metrics, are currently available in databases, and several applications for characterizing genetic sequences are also available. However, other kinds of statistical approaches and other information-visualization methods were clearly missing. In the present work, mathematical and computational methods were developed for the analysis of codon context and for identifying regions where codon repeats occur. New forms of information visualization were also developed to support the interpretation of the results. The statistical tools built into the model, such as clustering, residual analysis and codon adaptation indices, proved important for characterizing the coding sequences of several genomes. The final goal is for the information obtained to make it possible to identify the general rules that govern codon context in any genome.
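
As a minimal illustration of the objects analysed here, the sketch below counts a codon-usage table and an adjacent-codon ("context") table from a toy coding sequence; the sequence and all names are invented for the example.

from collections import Counter

def codons(seq):
    """Split an in-frame coding sequence into codons."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

seq = "ATGGCTGCTAAATTTGCTAAATAA"              # toy open reading frame
cs = codons(seq)
usage = Counter(cs)                           # codon distribution table
context = Counter(zip(cs, cs[1:]))            # adjacent codon-pair table
print(usage)
print(context.most_common(3))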

Relevance:

10.00%

Publisher:

Abstract:

The development of equipment for massive genome decoding has dramatically increased the amount of available data. However, to uncover relevant information from the analysis of these data, increasingly specific software is needed, oriented to particular tasks that help the researcher reach conclusions as quickly as possible. This is where bioinformatics comes in as a fundamental ally of biology, since it takes advantage of computational methods and infrastructures to develop algorithms and software applications. On the other hand, new biological questions must usually be answered with new, specific solutions, so application development becomes a permanent challenge for software engineers. It was in this context that the main objectives of this work arose, centred on the analysis of triplets and of repeats in primary DNA structures. To this end, new methods and new algorithms were proposed that allow large volumes of data to be processed and results to be obtained. For the analysis of codon and amino-acid triplets, a system with two components was proposed: on the one hand, data processing; on the other, publication of the processed data on the Web through a visual query-composition mechanism. For the analysis of repeats, a system was proposed and developed to identify repeated nucleotide and amino-acid patterns in specific sequences, with particular application to orthologous genes. The proposed solutions were subsequently validated through case studies that attest to the value of the work developed.
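
To illustrate the repeat-analysis side of the work, here is a small sketch that reports maximal runs of the same codon in a coding sequence; the toy input and function name are our own, not the thesis's software.

def codon_runs(seq, min_repeats=2):
    """Report maximal runs of an identical codon as (codon, start, length)."""
    cs = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    runs, i = [], 0
    while i < len(cs):
        j = i
        while j < len(cs) and cs[j] == cs[i]:
            j += 1
        if j - i >= min_repeats:
            runs.append((cs[i], i, j - i))
        i = j
    return runs

print(codon_runs("ATGGCTGCTGCTAAAAAATTT"))   # -> [('GCT', 1, 3), ('AAA', 4, 2)]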

Relevance:

10.00%

Publisher:

Abstract:

Several species of the genus Candida translate the leucine CUG codon as serine. In C. albicans this codon is translated by a serine tRNA (tRNACAG Ser) that is recognized by both leucyl- and seryl-tRNA synthetases (LeuRS and SerRS), allowing the incorporation of either leucine or serine at CUG positions. Under standard growth conditions, CUG codons incorporate 3% leucine and 97% serine; however, these values are flexible, since serine incorporation can vary between 0.6% and 5% in response to stress conditions. Previous in vivo studies in Escherichia coli suggested that CUG ambiguity is regulated by SerRS. Indeed, the C. albicans SerRS gene has a CUG codon at position 197 (Ser197) whose ambiguous decoding produces two SerRS isoforms. The SerRS_Leu197 isoform is more active, although less stable, than the SerRS_Ser197 isoform, supporting the idea of a negative feedback loop, involving these two SerRS isoforms, the LeuRS enzyme and the tRNACAG Ser, that keeps leucine incorporation at CUG codons low. In this thesis we show that such a mechanism is not operational in C. albicans cells. In fact, leucine incorporation at CUG codons fluctuates dramatically in response to environmental changes. For example, leucine incorporation can reach 49.33% in the presence of macrophages and amphotericin B, showing the remarkable tolerance of C. albicans to ambiguity. To understand the biological relevance of genetic code ambiguity in C. albicans, we constructed strains that incorporate serine at various codons. Although the growth rate was negatively affected under standard growth conditions, the constructed strains grew favourably under several stress conditions, suggesting that ambiguity plays an important role in adaptation to new ecological niches. The transcriptomes of the constructed C. albicans and Saccharomyces cerevisiae strains show that the two yeasts respond to codon ambiguity in distinct ways. Ambiguity induced a moderate deregulation of gene expression in C. albicans, but activated a common stress response in S. cerevisiae. The only cellular process induced in most strains was oxidation-reduction. Notably, the enrichment in cis elements of the transcription factors that regulate the ambiguity response differed between the two yeasts, suggesting that they respond to this stress in different ways. Overall, our study deepens the understanding of the high tolerance of C. albicans to codon ambiguity. The results suggest that this fungus uses CUG ambiguity during infection, possibly to modulate its interaction with the host and its response to antifungal drugs.

Relevance:

10.00%

Publisher:

Abstract:

Candida albicans is the major fungal pathogen in humans, causing diseases ranging from mild skin infections to severe systemic infections in immunocompromised individuals. The pathogenic nature of this organism is mostly due to its capacity to proliferate in numerous body sites and its ability to adapt to drastic changes in the environment. Candida albicans exhibits a unique translational system, decoding the leucine CUG codon ambiguously as leucine (3% of codons) and serine (97%) using a hybrid serine tRNA (tRNACAGSer). This tRNACAGSer is aminoacylated by two aminoacyl-tRNA synthetases (aaRSs): leucyl-tRNA synthetase (LeuRS) and seryl-tRNA synthetase (SerRS). Previous studies showed that exposure of C. albicans to macrophages, oxidative and pH stress, and antifungals increases Leu misincorporation levels from 3% to 15%, suggesting that C. albicans has the ability to regulate mistranslation levels in response to host defenses, antifungals and environmental stresses. The hypothesis tested in this work is therefore that Leu and Ser misincorporation at CUG codons depends on competition between LeuRS and SerRS for the tRNACAGSer. To test this hypothesis, the levels of SerRS and LeuRS were indirectly quantified under different physiological conditions, using a fluorescent reporter system that measures the activity of the respective promoters. The results suggest that an increase in Leu misincorporation at CUG codons is associated with an increase in LeuRS expression, with SerRS levels being maintained. In the second part of the work, the objective was to identify putative regulators of SerRS and LeuRS expression. To accomplish this goal, C. albicans strains from a transcription-factor knock-out collection were transformed with the fluorescent reporter system, and the expression of both aaRSs was quantified. Alterations in the LeuRS/SerRS expression of mutant strains compared to the wild-type strain allowed the identification of five transcription factors as possible regulators of LeuRS and SerRS expression: ASH1, HAP2, HAP3, RTG3 and STB5. Globally, this work provides the first step towards elucidating the molecular mechanism that regulates mistranslation in C. albicans.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes an MPEG (Moving Picture Experts Group) audio layer II - LFE (lower frequency extension) bit-stream processor targeting DAB (digital audio broadcasting) receivers. The processor handles the decoding of frames in a computationally efficient manner, providing a synthesis sub-band filter with the reconstructed sub-band samples. Focus is given to the frequency-sample reconstruction part, which handles the re-quantization and re-scaling of the samples once the necessary information has been extracted from the frame. A comparison with a direct implementation of the frequency-sample reconstruction block is carried out to demonstrate the increased computational efficiency.
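
The re-quantization and re-scaling step named above reduces, per sample, to mapping the coded integer back to a fraction and multiplying by the sub-band's scalefactor. The sketch below uses a generic symmetric mid-tread mapping to show the shape of the computation; the real processor must use the exact quantization classes and constants of the MPEG audio standard, which are not reproduced here.

def requantize(sample_code, bits, scalefactor):
    """Map a coded sample index back to a fraction in (-1, 1), then
    rescale by the sub-band's scalefactor (generic stand-in mapping)."""
    levels = 2 ** bits - 1                    # odd number of steps
    fraction = (2 * sample_code - (levels - 1)) / levels
    return scalefactor * fraction

# 4-bit coded sample with a mid-scale scalefactor (illustrative values).
print(requantize(sample_code=11, bits=4, scalefactor=0.5))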

Relevance:

10.00%

Publisher:

Abstract:

A simple but effective technique for improving the performance of the Max-Log-MAP algorithm is to scale the extrinsic information exchanged between the two MAP decoders. A comprehensive analysis of the selection of the scaling factors according to channel conditions and decoding iterations is presented in this paper. Choosing a constant scaling factor for all SNRs and iterations is compared with selecting the best scaling factor for the changing channel conditions and decoding iterations. It is observed that a constant scaling factor for all channel conditions and decoding iterations is the best solution, providing a 0.2-0.4 dB gain over the standard Max-Log-MAP algorithm. Therefore, a constant scaling factor should be chosen as the best compromise.
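
To show where that constant sits in the decoder, here is a sketch of an iterative turbo loop in which each constituent decoder's extrinsic output is scaled before being fed to the other decoder as a priori information. The decoder stub is a placeholder, not a real Max-Log-MAP implementation, and the 0.7 value is a commonly cited choice in the literature, not necessarily the factor this paper selects.

import numpy as np

def max_log_map_stub(systematic_llr, a_priori_llr):
    """Stand-in for a real Max-Log-MAP decoder: returns extrinsic LLRs."""
    return 0.5 * systematic_llr + 0.1 * a_priori_llr   # placeholder math

scale = 0.7                                   # constant scaling factor
sys_llr = np.array([1.0, -2.0, 0.5, -0.3])
apriori = np.zeros(4)
for it in range(4):                           # decoding iterations
    ext1 = max_log_map_stub(sys_llr, apriori)
    apriori = scale * ext1                    # scaled extrinsic -> decoder 2
    ext2 = max_log_map_stub(sys_llr, apriori)
    apriori = scale * ext2                    # scaled extrinsic -> decoder 1
print(apriori)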

Relevance:

10.00%

Publisher:

Abstract:

Final project submitted in fulfilment of the requirements for the Master's degree in Electronics and Telecommunications Engineering.