909 results for coding complexity


Relevance: 60.00%

Abstract:

The widespread use of wireless-enabled devices and the increasing capabilities of wireless technologies have promoted multimedia content access and sharing among users. However, the quality perceived by users still depends on multiple factors such as video characteristics, device capabilities, and link quality. While video characteristics include the video's temporal and spatial complexity as well as its coding complexity, one of the most important device characteristics is battery lifetime. There is a need to assess how these aspects interact and how they impact overall user satisfaction. This paper advances previous work by proposing and validating a flexible framework, named EViTEQ, to be applied in real testbeds to satisfy the requirements of performance assessment. EViTEQ is able to measure network interface energy consumption with high precision while remaining completely technology independent and assessing application-level quality of experience. The results obtained in the testbed show the relevance of combined multi-criteria measurement approaches, leading to superior evaluation of end-user satisfaction.

Relevance: 40.00%

Abstract:

Around 98% of all transcriptional output in humans is noncoding RNA. RNA-mediated gene regulation is widespread in higher eukaryotes and complex genetic phenomena like RNA interference, co-suppression, transgene silencing, imprinting, methylation, and possibly position-effect variegation and transvection, all involve intersecting pathways based on or connected to RNA signaling. I suggest that the central dogma is incomplete, and that intronic and other non-coding RNAs have evolved to comprise a second tier of gene expression in eukaryotes, which enables the integration and networking of complex suites of gene activity. Although proteins are the fundamental effectors of cellular function, the basis of eukaryotic complexity and phenotypic variation may lie primarily in a control architecture composed of a highly parallel system of trans-acting RNAs that relay state information required for the coordination and modulation of gene expression, via chromatin remodeling, RNA-DNA, RNA-RNA and RNA-protein interactions. This system has interesting and perhaps informative analogies with small world networks and dataflow computing.

Relevance: 40.00%

Abstract:

Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, depending on the video content, it creates SI with rather significant motion compensation errors in some frame regions and rather small errors in others. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and provide the decoder with reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a WZ-only coding mode solution.
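The per-block mode decision described above can be sketched as a simple rate comparison. This is a minimal illustration, not the paper's algorithm: the function name and the rate inputs are hypothetical, and the real selection criteria are richer (e.g., reliability of the SI for each block).

```python
def select_block_modes(rate_intra, rate_wz):
    """For each block, pick the coding mode with the lower estimated
    encoding rate (hypothetical simplification of the mode decision)."""
    return ['intra' if ri < rw else 'wz' for ri, rw in zip(rate_intra, rate_wz)]
```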

Relevance: 30.00%

Abstract:

Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
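As an illustration of the general MCFI idea (not the paper's specific regularized scheme), a block-based interpolator halves each motion vector estimated between the two decoded reference frames and applies it symmetrically. Everything here is an assumption for illustration: the function name, the motion-vector layout, and the absence of boundary handling and motion-field smoothing.

```python
import numpy as np

def mcfi_side_info(prev, nxt, motion, block=4):
    """Toy symmetric MCFI: for each block, apply -v/2 into the previous frame
    and +v/2 into the next frame, then average the two compensated blocks.
    `motion[r][c]` holds the (dy, dx) vector of block (r, c). Illustrative only."""
    h, w = prev.shape
    si = np.zeros_like(prev, dtype=float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion[by // block][bx // block]
            py, px = by - dy // 2, bx - dx // 2   # compensation into prev
            ny, nx_ = by + dy // 2, bx + dx // 2  # compensation into nxt
            p = prev[py:py + block, px:px + block]
            q = nxt[ny:ny + block, nx_:nx_ + block]
            si[by:by + block, bx:bx + block] = (p + q) / 2.0
    return si
```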

Relevance: 30.00%

Abstract:

Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested over the past decade in developing increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that which side information creation method provides the better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
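The adaptive combination of SI hypotheses can be illustrated with an inverse-variance weighting rule, a standard denoising-style fusion. This is a generic sketch, not the paper's exact algorithm: the function name and the assumption that each hypothesis comes with an estimated virtual-channel noise variance are mine.

```python
import numpy as np

def combine_side_info(hypotheses, noise_vars):
    """Fuse SI hypotheses pixel-wise, weighting each hypothesis inversely by
    its estimated virtual-channel noise variance (hypothetical weighting)."""
    hypotheses = np.asarray(hypotheses, dtype=float)        # (k, H, W)
    weights = 1.0 / (np.asarray(noise_vars, dtype=float) + 1e-12)
    weights /= weights.sum()                                # normalize to 1
    return np.tensordot(weights, hypotheses, axes=1)        # weighted average
```

With equal variances this reduces to a plain average; a hypothesis with high noise variance contributes little.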

Relevance: 30.00%

Abstract:

Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance when compared to the popular turbo codes.
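The row-merging operation can be sketched directly on a binary parity-check matrix: merging two check nodes is GF(2) addition (XOR) of their rows, so fewer rows means fewer syndrome bits to send, i.e., a higher compression ratio. This is a minimal sketch of the matrix operation only; the function name and the choice of which pairs to merge are mine, and the paper's actual design criteria are not reproduced here.

```python
import numpy as np

def merge_check_nodes(H, pairs):
    """Merge each (i, j) pair of parity-check rows by XOR (GF(2) addition),
    keeping row i with the merged equation and dropping row j."""
    H = H.copy() % 2
    drop = set()
    for i, j in pairs:
        H[i] = (H[i] + H[j]) % 2  # XOR of the two check equations
        drop.add(j)
    keep = [r for r in range(H.shape[0]) if r not in drop]
    return H[keep]
```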

Relevance: 30.00%

Abstract:

As high dynamic range video gains popularity, video coding solutions able to efficiently provide both low and high dynamic range video, notably within a single bitstream, are increasingly important. While simulcasting can provide both dynamic ranges at the cost of some compression efficiency penalty, bit-depth scalable video coding can provide a better trade-off between compression efficiency, adaptation flexibility and computational complexity. Considering the widespread use of H.264/AVC video, this paper proposes an H.264/AVC backward-compatible bit-depth scalable video coding solution offering a low dynamic range base layer and two high dynamic range enhancement layers with different qualities, at low complexity. Experimental results show that the proposed solution has an acceptable rate-distortion performance penalty with regard to the HDR H.264/AVC single-layer coding solution.

Relevance: 30.00%

Abstract:

BACKGROUND: The comparison of complete genomes has revealed surprisingly large numbers of conserved non-protein-coding (CNC) DNA regions. However, the biological function of CNC remains elusive. CNC differ in two aspects from conserved protein-coding regions. They are not conserved across phylum boundaries, and they do not contain readily detectable sub-domains. Here we characterize the persistence length and time of CNC and conserved protein-coding regions in the vertebrate and insect lineages. RESULTS: The persistence length is the length of a genome region over which a certain level of sequence identity is consistently maintained. The persistence time is the evolutionary period during which a conserved region evolves under the same selective constraints. Our main findings are: (i) Insect genomes contain 1.60 times less conserved information than vertebrates; (ii) Vertebrate CNC have a higher persistence length than conserved coding regions or insect CNC; (iii) CNC have shorter persistence times as compared to conserved coding regions in both lineages. CONCLUSION: Higher persistence length of vertebrate CNC indicates that the conserved information in vertebrates and insects is organized in functional elements of different lengths. These findings might be related to the higher morphological complexity of vertebrates and give clues about the structure of active CNC elements. Shorter persistence time might explain the previously puzzling observations of highly conserved CNC within each phylum, and of a lack of conservation between phyla. It suggests that CNC divergence might be a key factor in vertebrate evolution. Further evolutionary studies will help to relate individual CNC to specific developmental processes.
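The persistence-length definition above (the length over which a given identity level is consistently maintained) can be illustrated with a toy computation over a per-site identity profile. The function name, input format, and threshold are illustrative assumptions, not the study's actual pipeline.

```python
def persistence_length(identity, threshold=0.7):
    """Longest contiguous run of sites whose identity stays >= threshold,
    a simplified reading of 'persistence length' (parameters illustrative)."""
    best = cur = 0
    for v in identity:
        cur = cur + 1 if v >= threshold else 0
        best = max(best, cur)
    return best
```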

Relevance: 30.00%

Abstract:

Clonally complex infections by Mycobacterium tuberculosis are increasingly accepted. Studies of their dimension in epidemiological scenarios where the infective pressure is not high are scarce. Our study systematically searched for clonally complex infections (mixed infections by more than one strain and the simultaneous presence of clonal variants) by applying mycobacterial interspersed repetitive-unit (MIRU)-variable-number tandem-repeat (VNTR) analysis to M. tuberculosis isolates from two population-based samples of respiratory (703 cases) and respiratory-extrapulmonary (R+E) tuberculosis (TB) cases (71 cases) in a context of moderate TB incidence. Clonally complex infections were found in 11 (1.6%) of the respiratory TB cases and in 10 (14.1%) of those with R+E TB. Among the 21 cases with clonally complex TB, 9 were infected by 2 independent strains and the remaining 12 showed the simultaneous presence of 2 to 3 clonal variants. For the 10 R+E TB cases with clonally complex infections, compartmentalization (different compositions of strains/clonal variants in independent infected sites) was found in 9 of them. All the strains/clonal variants were also genotyped by IS6110-based restriction fragment length polymorphism analysis, which split two MIRU-defined clonal variants, although in general it showed a lower discriminatory power to identify the clonal heterogeneity revealed by MIRU-VNTR analysis. The comparative analysis of IS6110 insertion sites between coinfecting clonal variants showed differences in the genes coding for a cutinase, a PPE family protein, and two conserved hypothetical proteins. Diagnostic delay, existence of previous TB, risk of overexposure, and clustered/orphan status of the involved strains were analyzed to propose possible explanations for the cases with clonally complex infections. Our study characterizes in detail all the clonally complex infections by M. tuberculosis found in a systematic survey and shows that these phenomena can occur to a higher extent than expected, even in an unselected population-based sample lacking high infective pressure.

Relevance: 30.00%

Abstract:

Understanding the extent of genomic transcription and its functional relevance is a central goal in genomics research. However, detailed genome-wide investigations of transcriptome complexity in major mammalian organs have been scarce. Here, using extensive RNA-seq data, we show that transcription of the genome is substantially more widespread in the testis than in other organs across representative mammals. Furthermore, we reveal that meiotic spermatocytes and especially postmeiotic round spermatids have remarkably diverse transcriptomes, which explains the high transcriptome complexity of the testis as a whole. The widespread transcriptional activity in spermatocytes and spermatids encompasses not only protein-coding and long noncoding RNA genes but also poorly conserved intergenic sequences, suggesting that it may not be of immediate functional relevance. Rather, our analyses of genome-wide epigenetic data suggest that this prevalent transcription, which most likely promoted the birth of new genes during evolution, is facilitated by an overall permissive chromatin in these germ cells that results from extensive chromatin remodeling.

Relevance: 30.00%

Abstract:

In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article 'On Maximum-Likelihood Detection and the Search for the Closest Lattice Point', published by M.O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications to the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
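The closest-lattice-point search that sphere decoding performs can be sketched as a depth-first tree search with radius pruning over an upper-triangular system (as obtained from a QR decomposition of the channel matrix). This is a generic textbook-style sketch, not the thesis's modified algorithms; all names are mine.

```python
import numpy as np

def sphere_decode(R, y, alphabet, radius):
    """Depth-first sphere decoder: find the symbol vector s (entries from a
    finite alphabet) minimizing ||y - R s||^2, pruning any branch whose
    partial squared distance already exceeds the best found so far.
    R must be upper triangular, so levels are searched from last to first."""
    n = R.shape[0]
    best = {"d": radius, "s": None}

    def search(level, s, partial):
        if level < 0:  # full vector fixed: candidate inside the sphere
            if partial < best["d"]:
                best["d"], best["s"] = partial, s.copy()
            return
        for sym in alphabet:
            s[level] = sym
            resid = y[level] - R[level, level:] @ s[level:]
            d = partial + resid ** 2
            if d < best["d"]:  # prune branches outside the current sphere
                search(level - 1, s, d)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"], best["d"]
```

Shrinking the radius to each new best distance (as above) is the standard Schnorr-Euchner-style refinement that keeps the search tree small.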

Relevance: 30.00%

Abstract:

Williams syndrome (WS) is a rare genetic disorder with a unique cognitive profile in which verbal abilities are markedly stronger than visuospatial abilities. This study investigated the claim that orientation coding is a specific deficit within the visuospatial domain in WS. Experiment 1 employed a simplified version of the Benton Judgment of Line Orientation task and a control, length-matching task. Results demonstrated comparable levels of orientation matching performance in the group with WS and a group of typically developing (TD) controls matched by nonverbal ability, although it is possible that floor effects masked group differences. A group difference was observed in the length-matching task due to stronger performance from the control group. Experiment 2 employed an orientation-discrimination task and a length-discrimination task. Contrary to previous reports, the results showed that individuals with WS were able to code by orientation to a comparable level as that of their matched controls. This demonstrates that, although some impairment is apparent, orientation coding does not represent a specific deficit in WS. Comparison between Experiments 1 and 2 suggests that orientation coding is vulnerable to task complexity. However, once again, this vulnerability does not appear to be specific to the population with WS, as it was also apparent in the TD controls.

Relevance: 30.00%

Abstract:

A parallel interference cancellation (PIC) detection scheme is proposed to suppress the impact of imperfect synchronisation. By treating as interference the extra components in the received signal caused by timing misalignment, the PIC detector not only offers much improved performance but also retains a low structural and computational complexity.
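A single PIC stage can be sketched in a simplified synchronous linear model: each user's estimated contribution is reconstructed, the interference from all other users is subtracted in parallel, and each user is re-detected. The model (real channel matrix, BPSK symbols, matched-filter re-detection) and all names are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def pic_stage(y, H, s_hat):
    """One parallel interference cancellation stage: for each user k, subtract
    the reconstructed interference of all other users from the received
    vector y, then re-detect user k with a hard +/-1 decision."""
    K = H.shape[1]
    new = np.empty(K)
    for k in range(K):
        # interference seen by user k: everyone's contribution except user k's
        interference = H @ s_hat - H[:, k] * s_hat[k]
        zk = H[:, k] @ (y - interference)  # matched filter on the cleaned signal
        new[k] = 1.0 if zk >= 0 else -1.0
    return new
```

In practice such stages are iterated, and the abstract's point is that the same structure can also absorb the extra signal components caused by timing misalignment by treating them as interference.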