Abstract:
A novel high-throughput and scalable unified architecture for computing the transform operations of advanced video coding standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled, in terms of performance and hardware cost, to meet the specific requirements of any given video coding application. Experimental results obtained with a Xilinx Virtex-5 FPGA demonstrate the superior performance and hardware efficiency of the proposed structure, whose throughput per unit of area is higher than that of similar recently published designs targeting the H.264/AVC standard. The results also show that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x over pure software implementations of the transform algorithms, therefore allowing all the above-mentioned transforms to be computed in real time for Ultra High Definition Video (UHDV) sequences (7,680 x 4,320 @ 30 fps).
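As a point of reference for the transforms the architecture accelerates, the 4 x 4 forward integer core transform of H.264/AVC can be sketched as follows (a plain numpy reference computation, not the hardware butterfly datapath the paper implements):

```python
import numpy as np

# Forward 4x4 integer core transform matrix of H.264/AVC.
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_4x4(block):
    """Two-dimensional 4x4 forward core transform: Y = Cf . X . Cf^T."""
    return CF @ block @ CF.T

# A flat block concentrates all its energy in the DC coefficient.
flat = np.full((4, 4), 10)
coeffs = forward_4x4(flat)
print(coeffs[0, 0])  # 160 (= 16 * 10); all other coefficients are 0
```

Because the matrix contains only the values 1 and 2, the transform needs just additions and shifts, which is what makes dedicated hardware implementations like the one above so efficient.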
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility of developing so-called Wyner-Ziv video codecs, which follow a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with respect to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide the review, allowing more solid conclusions to be drawn and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that the rate-distortion (RD) performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In recent years it has become increasingly clear that the mammalian transcriptome is highly complex and includes a large number of small non-coding RNAs (sncRNAs) and long non-coding RNAs (lncRNAs). Here we review the biogenesis pathways of the three classes of sncRNAs, namely short interfering RNAs (siRNAs), microRNAs (miRNAs) and PIWI-interacting RNAs (piRNAs). These ncRNAs have been extensively studied and are involved in pathways leading to specific gene silencing and in the protection of genomes against viruses and transposons, for example. lncRNAs have also emerged as pivotal molecules for the transcriptional and post-transcriptional regulation of gene expression, which is supported by their tissue-specific expression patterns, subcellular distribution and developmental regulation. Therefore, we also focus our attention on their role in differentiation and development. SncRNAs and lncRNAs play critical roles in defining DNA methylation patterns as well as in chromatin remodeling, thus having a substantial effect on epigenetics. The identification of overlaps in their biogenesis pathways and functional roles raises the hypothesis that these molecules play concerted functions in vivo, creating complex regulatory networks in which cooperation with regulatory proteins is necessary. We also highlight the implications of the deregulation of sncRNA and lncRNA biogenesis and expression in human diseases such as cancer.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
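The abstract does not give the exact combination rule; a common denoising-inspired choice is to weight each hypothesis by the inverse of its estimated virtual-channel noise variance, sketched here under that assumption:

```python
import numpy as np

def fuse_hypotheses(hypotheses, noise_vars):
    """Combine side-information hypotheses pixel-wise.

    Each hypothesis is weighted by the inverse of its estimated
    virtual-channel noise variance (an assumption for illustration;
    the paper's exact combination rule may differ).
    """
    weights = np.array([1.0 / v for v in noise_vars])
    weights /= weights.sum()
    stack = np.stack(hypotheses).astype(float)
    return np.tensordot(weights, stack, axes=1)

h1 = np.array([[100.0, 102.0], [98.0, 101.0]])   # low-noise hypothesis
h2 = np.array([[110.0,  90.0], [120.0,  95.0]])  # high-noise hypothesis
fused = fuse_hypotheses([h1, h2], noise_vars=[1.0, 9.0])
print(fused)  # much closer to h1, since its estimated variance is lower
```

With variances 1.0 and 9.0 the weights become 0.9 and 0.1, so the fused estimate leans heavily on the more reliable hypothesis, which is the intuition behind combining hypotheses as decoding progresses.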
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance over a wide range of source correlations and a better RD performance than the popular turbo codes.
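The node-merging idea can be illustrated directly on a toy parity-check matrix: merging two check nodes amounts to a modulo-2 addition of the corresponding rows, which removes one syndrome bit per codeword and thus lowers the compression ratio. A minimal sketch (toy matrix, not an actual code from the paper):

```python
import numpy as np

def merge_check_nodes(H, i, j):
    """Merge parity-check rows i and j of H by modulo-2 addition,
    reducing the number of rows (syndrome bits) by one."""
    merged = (H[i] + H[j]) % 2
    H_new = np.delete(H, [i, j], axis=0)
    return np.vstack([H_new, merged])

H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
H_merged = merge_check_nodes(H, 0, 1)
print(H_merged.shape)  # (2, 4): one fewer syndrome bit per codeword
```

Starting from a low-rate mother matrix and merging rows on demand gives the graceful, correlation-dependent rate adaptation the abstract describes.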
Abstract:
A 9.9 kb DNA fragment from the right arm of chromosome VII of Saccharomyces cerevisiae has been sequenced and analysed. The sequence contains four open reading frames (ORFs) longer than 100 amino acids. One gene, PFK1, had already been cloned and sequenced, and another is probably the yeast gene coding for the beta-subunit of succinyl-CoA synthetase. The two remaining ORFs share homology with the deduced amino acid sequences of the YHR161c and YHR162w ORFs from chromosome VIII, and their physical arrangement is similar to that of those ORFs.
Abstract:
In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the obtained data to perform an efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. In this paper, we propose to extract and code binary descriptors to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings of up to 35% without any impact on the efficiency of the image retrieval task. © 2014 EURASIP.
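The Inter mode can be illustrated with a toy sketch: XORing a binary descriptor against another descriptor of the same image yields a sparse residual when the two are correlated, and a sparse residual is cheaper to entropy-code (illustrative 256-bit descriptors, not the paper's actual coding scheme):

```python
import numpy as np

def inter_mode_residual(descriptor, reference):
    """Inter-mode sketch: XOR a binary descriptor against a reference
    descriptor from the same image; correlated descriptors yield a
    sparse residual that is cheaper to entropy-code."""
    return np.bitwise_xor(descriptor, reference)

rng = np.random.default_rng(0)
ref = rng.integers(0, 2, 256)            # 256-bit binary descriptor
desc = ref.copy()
flip = rng.choice(256, size=20, replace=False)
desc[flip] ^= 1                          # correlated: only 20 bits differ
residual = inter_mode_residual(desc, ref)
print(residual.sum())  # 20 set bits out of 256 -> highly compressible
```

The entropy of a bit stream with only 20 ones in 256 positions is far below 1 bit per position, which is where the reported bitrate savings come from.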
Abstract:
As high dynamic range video gains popularity, video coding solutions able to efficiently provide both low and high dynamic range video, notably within a single bitstream, are increasingly important. While simulcasting can provide both dynamic ranges at the cost of some compression efficiency penalty, bit-depth scalable video coding can provide a better trade-off between compression efficiency, adaptation flexibility and computational complexity. Considering the widespread use of H.264/AVC video, this paper proposes an H.264/AVC backward-compatible bit-depth scalable video coding solution offering a low dynamic range base layer and two high dynamic range enhancement layers with different qualities, at low complexity. Experimental results show that the proposed solution has an acceptable rate-distortion performance penalty with respect to the HDR H.264/AVC single-layer coding solution.
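The inter-layer prediction at the heart of bit-depth scalability can be sketched in a toy form: the decoded 8-bit base layer is inverse-tone-mapped to predict the higher-bit-depth signal, and the enhancement layer codes only the residual. A simple linear scaling is assumed here; the paper's actual prediction mechanism may differ:

```python
import numpy as np

def predict_hdr_from_ldr(ldr8, gain=4):
    """Inter-layer prediction sketch: inverse tone mapping of the 8-bit
    base layer to a 10-bit estimate (a linear scaling is an assumption
    for illustration; real inverse tone mapping is usually non-linear)."""
    return np.clip(ldr8.astype(np.int32) * gain, 0, 1023)

ldr = np.array([[0, 128], [200, 255]], dtype=np.uint8)
hdr_pred = predict_hdr_from_ldr(ldr)
# The enhancement layer then codes only the residual hdr_true - hdr_pred.
print(hdr_pred.max())  # 1020
```

Because the prediction reuses the already-decoded base layer, the enhancement layers add little complexity, which matches the low-complexity goal stated in the abstract.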
Abstract:
In video communication systems, the video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, degrading the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing on principles and tools from an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience, the main technical novelties of the proposed video coding solution are: (i) an improved compressed-domain perceptual classification mechanism; (ii) an improved transcoding tool for the DSC-based protection mechanism; and (iii) the integration of a perceptual classification mechanism and a DSC-based error protection mechanism in an H.264/AVC compliant codec. The performance results show that the proposed PDEP video codec is a better-performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes. (C) 2013 Elsevier B.V. All rights reserved.
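The perceptually driven allocation idea can be sketched as distributing a fixed parity budget across coded regions in proportion to a perceptual-relevance weight (hypothetical region names and weights; the paper derives relevance from its compressed-domain classification mechanism):

```python
def allocate_fec(blocks, budget):
    """Perceptually driven protection sketch: distribute a parity-bit
    budget across coded regions proportionally to a perceptual-relevance
    weight (weights here are hypothetical, for illustration only)."""
    total = sum(w for _, w in blocks)
    return {name: round(budget * w / total) for name, w in blocks}

# Regions with higher perceptual relevance receive stronger protection.
blocks = [("face_region", 5.0), ("background", 1.0), ("texture", 2.0)]
print(allocate_fec(blocks, budget=800))
# {'face_region': 500, 'background': 100, 'texture': 200}
```

Under the same total overhead, errors then tend to land in regions where they hurt the viewing experience least, which is the core advantage over uniform FEC.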
Abstract:
The clinical record is a confidential hospital document in which the entire clinical history of a given patient is recorded. This history is recorded by assigning codes described in a classification system, also called coding data. The coding data are stored electronically in a database, from which measures such as hospital billing, annual hospital financing, medical treatments, or even epidemiological data are derived. It is therefore essential to guarantee quality in the area of clinical coding, which requires auditing the clinical records. The audit process is an independent evaluation activity whose goal is to add value and to improve the audited objects and processes. A tool called Programa Auditor is currently in use, but it relies on outdated technology. The thesis defended here is that, through Expert Systems, it is possible to perform internal audits of clinical records. It is also intended to demonstrate that Expert Systems benefit the experts in the area, since they reduce the probability of errors and make the verification process less time-consuming. In this context, the objective of this Dissertation is to define and implement an Expert System that enables the auditing of hospital clinical records. The Expert System should also be able to translate the expert's reasoning, thereby detecting the various types of errors that make a clinical record non-conforming. A prototype of an Expert System for auditing clinical records was developed in the PROLOG language. This prototype served as the basis for an experiment that allowed it to be compared with the Programa Auditor and its applicability in day-to-day hospital practice to be verified.
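The rule-based verification the thesis implements in PROLOG can be sketched, for illustration only, as a small rule engine in Python (the audit rules below are hypothetical; the actual system encodes the coding expert's reasoning):

```python
def audit_record(record, rules):
    """Apply audit rules to a coded clinical record; each rule returns
    an error message when the record is non-conforming, or None when
    the rule passes (hypothetical rules, for illustration)."""
    return [msg for rule in rules if (msg := rule(record))]

rules = [
    lambda r: "missing principal diagnosis" if not r.get("principal_dx") else None,
    lambda r: "discharge before admission" if r["discharge"] < r["admission"] else None,
]

record = {"principal_dx": "", "admission": 5, "discharge": 3}
print(audit_record(record, rules))  # both rules fire on this record
```

Each rule plays the role of a PROLOG clause: the engine collects every rule whose condition holds, producing the list of non-conformities for the auditor.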
Abstract:
Given the current popularity of Wi-Fi communications in devices such as laptops, mobile phones and tablets, used by practically everyone, the idea arose of using this low-cost, license-exempt technology in a maritime communications scenario. In this context, it can provide broadband Internet access to groups of vessels that currently rely on high-cost (satellite) and/or narrowband (VHF radio) technologies. With broadband access, vessel owners will be able to use applications of interest to their business or leisure activities that were previously available only near the coast, where cellular coverage exists. This thesis presents a theoretical and practical study of the range and corresponding performance of broadband communications in a maritime environment, using part of the license-exempt 5.8 GHz frequency band and the IEEE 802.11n standard. To use mass-produced equipment operating in this band, two standards are available: IEEE 802.11a and IEEE 802.11n. IEEE 802.11n was chosen because its physical-layer coding schemes allow higher data rates and MIMO. To carry out the experimental tests, a point-to-point communication prototype consisting of two communication nodes was built. One node was installed on a fishing vessel in collaboration with the Propeixe association, and the other in the Edifício Transparente building in Porto, in collaboration with the building's management and the Porto Digital association. As far as is known, this is the first test of Wi-Fi communications carried out under these conditions worldwide. The objectives of the work were achieved: Wi-Fi communications were established in the 5.8 GHz band up to about 7 km, with a minimum average throughput of 1 Mbit/s.
The test environment developed and the results obtained will serve as a basis for future research work in the area of maritime communications.
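The reported 7 km range can be sanity-checked with the standard free-space path loss formula (an idealized model that ignores sea-surface reflection, antenna heights and fading, all of which matter in a real maritime link):

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44.
    Idealized model; it ignores multipath and sea-surface reflection."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss = fspl_db(7, 5800)
print(round(loss, 1))  # ~124.6 dB at 7 km on the 5.8 GHz band
```

A loss of roughly 125 dB is recoverable with directional antennas and the robust low-rate MCS options of 802.11n, which is consistent with the measured minimum average throughput of 1 Mbit/s at that distance.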
Abstract:
Dissertation presented in fulfillment of the requirements for the Degree of Doctor of Philosophy in Biology (Molecular Genetics) at the Instituto de Tecnologia Química e Biológica da Universidade Nova de Lisboa
Abstract:
In-network storage of data in wireless sensor networks helps reduce communications inside the network and favors data aggregation. In this paper, we consider the use of n out of m codes and data dispersal in combination with in-network storage. In particular, we provide an abstract model of in-network storage to show how n out of m codes can be used, and we discuss how this can be achieved in five case studies. We also define a model for evaluating the probability of correct data encoding and decoding, and we use this model and simulations to show how, in the case studies, the parameters of the n out of m codes and of the network should be configured to achieve correct data coding and decoding with high probability.
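One classical way to realize an n out of m code for data dispersal is Rabin-style information dispersal: the n data symbols are taken as coefficients of a polynomial over a prime field, evaluated at m points, and any n evaluations recover the data by Lagrange interpolation. A toy sketch under that assumption (the paper's concrete codes may differ):

```python
P = 257  # prime field large enough for byte-valued symbols

def disperse(symbols, m):
    """Encode n data symbols (polynomial coefficients) into m shares;
    any n of the m shares suffice to reconstruct the data."""
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(symbols)) % P)
            for x in range(1, m + 1)]

def reconstruct(shares, n):
    """Recover the n coefficients from any n shares by Lagrange
    interpolation at the n evaluation points."""
    pts = shares[:n]
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(pts):
        basis = [1]   # Lagrange basis numerator, ascending coefficients
        denom = 1
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            denom = denom * (xi - xj) % P
            new = [0] * (len(basis) + 1)   # multiply basis by (x - xj)
            for k, c in enumerate(basis):
                new[k] = (new[k] - xj * c) % P
                new[k + 1] = (new[k + 1] + c) % P
            basis = new
        scale = yi * pow(denom, P - 2, P) % P  # yi / denom in GF(P)
        for k in range(n):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % P
    return coeffs

data = [7, 42, 99]                    # n = 3 data symbols
shares = disperse(data, m=5)          # dispersed over m = 5 storage nodes
print(reconstruct(shares[2:], n=3))   # any 3 shares recover [7, 42, 99]
```

Dispersing the m shares over distinct nodes means the data survives the loss of up to m - n nodes, which is the property the storage model in the paper builds on.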
Abstract:
RTUWO Advances in Wireless and Optical Communications 2015 (RTUWO 2015), 5-6 November 2015, Riga, Latvia.