794 results for Scalable Video Codec
Abstract:
The recent IEEE 802.11n standard offers high throughput in wireless local area networks, and a massive adoption of this technology is therefore expected, progressively replacing 802.11b/g networks. Due to its high capacity, this recent generation of 802.11n wireless networks enables a sharp growth in audiovisual services. In this context, this dissertation studies the 802.11n network, characterising the performance and quality associated with a video transmission service, relying for that purpose on an 802.11n network simulation architecture. In this way, the impact of the new MAC-layer features introduced in the 802.11n standard, such as A-MSDU and A-MPDU aggregation, is characterised, as well as the impact of the new physical-layer features, such as MIMO; in both cases, the parametrisation is optimised. It is also shown that the main H.264/AVC video coding techniques for optimising the video distribution process make it possible to optimise the overall performance of the transmission system. By combining the optimisation and parametrisation of the MAC layer, the physical layer and the coding process, it is possible to propose a set of configurations that yield the best quality-of-service performance for the transmission of video content over an 802.11n network. The simulation architecture built in this dissertation is specifically adapted to support the MAC-layer aggregation techniques, as well as the encapsulation in network protocols that enable the transmission of H.264/AVC-coded RTP video packets.
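The MAC efficiency gain from A-MPDU aggregation described above can be pictured with a back-of-the-envelope throughput model. The sketch below is not the dissertation's simulation architecture; the PHY rate and per-access overhead constants are illustrative assumptions.

# Rough model of the MAC efficiency gain from 802.11n A-MPDU aggregation.
# PHY rate and per-access overhead are illustrative assumptions, not
# measured figures; the 4-byte A-MPDU subframe delimiter is per the standard.

PHY_RATE_MBPS = 300.0       # assumed 802.11n PHY rate (MIMO, 40 MHz)
PER_TX_OVERHEAD_US = 150.0  # assumed cost per channel access (preamble, IFS, ACK)
MPDU_DELIMITER_BITS = 32    # A-MPDU delimiter per subframe

def effective_throughput_mbps(payload_bytes: int, n_aggregated: int) -> float:
    """Effective MAC throughput when n_aggregated MPDUs share one channel access."""
    payload_bits = 8 * payload_bytes * n_aggregated
    delimiter_bits = MPDU_DELIMITER_BITS * n_aggregated
    airtime_us = (payload_bits + delimiter_bits) / PHY_RATE_MBPS + PER_TX_OVERHEAD_US
    return payload_bits / airtime_us  # bits/us == Mbps

for n in (1, 4, 16, 64):
    print(f"A-MPDU of {n:3d} frames: {effective_throughput_mbps(1500, n):6.1f} Mbps")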
Abstract:
This article aims to discuss the role humour plays in politics, particularly in a media environment overflowing with user-generated video. We start with a genealogy of political satire, from classical to Internet times, followed by a general description of “the Downfall meme,” a series of videos on YouTube featuring footage from the film Der Untergang and nonsensical subtitles. Amid video-games, celebrities, and the Internet itself, politicians and politics are the target of such twenty-first century caricatures. By analysing these videos we hope to elucidate how the manipulation of images is embedded in everyday practices and may be of political consequence, namely by deflating politicians' constructed media image. The realm of image, at the centre of the Internet's technological culture, is connected with decisive aspects of today's social structure of knowledge and play. It is timely to understand which part of “playing” is in fact an expressive practice with political significance.
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
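A much-simplified sketch of the multiple-SI idea follows: each hypothesis is modelled as the source seen through its own binary symmetric channel, and per-bit log-likelihood ratios are summed before a hard decision. The paper's actual scheme iterates between full LDPC syndrome decoders; the crossover probabilities and word length here are assumptions for illustration.

import numpy as np

# Simplified soft-combining of multiple side-information (SI) hypotheses.
# Each SI i is modelled as the source seen through a BSC with crossover
# probability p[i]; LLRs are summed, assuming conditionally independent
# hypotheses. The paper's joint syndrome decoding is richer than this.

def si_llr(si_bits: np.ndarray, p: float) -> np.ndarray:
    """LLR of the source bits given one SI hypothesis over a BSC(p)."""
    llr_mag = np.log((1.0 - p) / p)
    return np.where(si_bits == 0, llr_mag, -llr_mag)

def fuse_hypotheses(hypotheses, crossover_probs) -> np.ndarray:
    """Combine independent SI hypotheses by adding their LLRs."""
    return sum(si_llr(h, p) for h, p in zip(hypotheses, crossover_probs))

# Example: two noisy versions of a source word; hard decision on fused LLRs.
rng = np.random.default_rng(0)
source = rng.integers(0, 2, 64)
si1 = source ^ (rng.random(64) < 0.10)
si2 = source ^ (rng.random(64) < 0.20)
fused = fuse_hypotheses([si1, si2], [0.10, 0.20])
decoded = (fused < 0).astype(int)  # positive LLR favours bit 0
print("bit errors after fusion:", int(np.sum(decoded != source)))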
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
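One way to picture the denoising-inspired fusion is an inverse-variance weighted average of the SI hypotheses, with the variances standing in for the virtual-channel statistics mentioned above. This is a static sketch of the idea only; the paper's algorithm is adaptive and iterative.

import numpy as np

# Hedged sketch: fuse several side-information frames by weighting each
# hypothesis with the inverse of its assumed noise variance, in the spirit
# of video denoising filters. The adaptive refinement of the actual
# algorithm is omitted.

def fuse_side_information(hypotheses: list, noise_vars: list) -> np.ndarray:
    """Pixel-wise inverse-variance weighted average of SI hypotheses."""
    weights = np.array([1.0 / v for v in noise_vars])
    weights /= weights.sum()
    stack = np.stack(hypotheses).astype(np.float64)
    return np.tensordot(weights, stack, axes=1)

# Example: three interpolated frames with different noise levels.
rng = np.random.default_rng(1)
truth = rng.integers(0, 256, (4, 4)).astype(np.float64)
hyps = [truth + rng.normal(0, s, truth.shape) for s in (2.0, 5.0, 10.0)]
fused = fuse_side_information(hyps, [4.0, 25.0, 100.0])
print("MSE of best single hypothesis:", np.mean((hyps[0] - truth) ** 2))
print("MSE of fused estimate        :", np.mean((fused - truth) ** 2))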
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance when compared to the popular turbo codes.
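The check-node merging operation itself is simple to illustrate: XOR-ing pairs of rows of the parity-check matrix H halves the number of syndrome bits and thus raises the compression ratio. The toy matrix below is an assumption for demonstration; real DVC codecs use large, carefully designed LDPC matrices.

import numpy as np

# Merging parity-check nodes: XOR consecutive row pairs of H over GF(2),
# halving the syndrome length. The syndrome of the merged code equals the
# XOR of the corresponding original syndrome bits.

def merge_check_nodes(H: np.ndarray) -> np.ndarray:
    """Merge consecutive row pairs of H over GF(2)."""
    assert H.shape[0] % 2 == 0
    return H[0::2] ^ H[1::2]

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=np.uint8)

H_merged = merge_check_nodes(H)
x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
syndrome_full = H @ x % 2            # 4 syndrome bits for 6 source bits
syndrome_merged = H_merged @ x % 2   # 2 syndrome bits: higher compression
print(syndrome_full, syndrome_merged)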
Abstract:
A new high-performance architecture for the computation of all the DCT operations adopted in the H.264/AVC and HEVC standards is proposed in this paper. In contrast to other dedicated transform cores, the presented multi-standard transform architecture is based on a completely configurable, scalable and unified structure that is able to compute not only the forward and inverse 8×8 and 4×4 integer DCTs and the 4×4 and 2×2 Hadamard transforms defined in the H.264/AVC standard, but also the 4×4, 8×8, 16×16 and 32×32 integer transforms adopted in HEVC. Experimental results obtained using a Xilinx Virtex-7 FPGA demonstrate the superior performance and hardware efficiency levels provided by the proposed structure, which outperforms its most prominent related designs by at least 1.8 times. When integrated in a multi-core embedded system, this architecture allows the real-time computation of all the transforms mentioned above for resolutions as high as 8k Ultra High Definition Television (UHDTV) (7680×4320 @ 30 fps).
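As a software reference for the kind of kernel such a core computes, the sketch below applies the H.264/AVC 4×4 forward integer DCT (Y = C X C^T, with scaling folded into quantisation). It is a functional model only and says nothing about the paper's configurable datapath or its FPGA mapping.

import numpy as np

# H.264/AVC 4x4 forward integer core transform: Y = C4 @ X @ C4.T.
# Post-scaling is folded into quantisation and omitted here.

C4 = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int32)

def forward_dct4x4(block: np.ndarray) -> np.ndarray:
    """4x4 integer core transform as defined in H.264/AVC."""
    return C4 @ block @ C4.T

block = np.arange(16, dtype=np.int32).reshape(4, 4)
print(forward_dct4x4(block))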
Abstract:
Mimicking the online practices of citizens has been declared imperative for improving communication and extending participation. This paper seeks to contribute to the understanding of how European discourses praising online video as a communication tool have been translated into actual practices by politicians, governments and organisations. By contrasting official documents with YouTube activity, it is argued that new opportunities for European political communication are far from being fully embraced, much akin to the early years of websites. The main choice has been to use YouTube channels fundamentally for distribution and archiving, thus neglecting their social media features. The disabling of comments by many heads of state and prime ministers - and, in 2010, the European Commission - indicates such an attitude. The few attempts made to foster citizen engagement, in particular during elections, have had limited success, given low participation numbers and a lack of argument exchange.
Abstract:
Conference: IEEE 24th International Conference on Application-Specific Systems, Architectures and Processors (ASAP), Jun 05-07, 2013
Abstract:
Consider the problem of designing an algorithm for acquiring sensor readings. Consider specifically the problem of obtaining an approximate representation of sensor readings where (i) sensor readings originate from different sensor nodes, (ii) the number of sensor nodes is very large, (iii) all sensor nodes are deployed in a small area (a dense network) and (iv) all sensor nodes communicate over a communication medium where at most one node can transmit at a time (a single broadcast domain). We present an efficient algorithm for this problem, and our novel algorithm has two desirable properties: (i) it obtains an interpolation based on all sensor readings and (ii) it is scalable, that is, its time-complexity is independent of the number of sensor nodes. Achieving these two properties is possible thanks to the close interlinking of the information processing algorithm, the communication system and a model of the physical world.
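The flavour of such a scheme can be sketched as follows: approximate the field using only a few readings, greedily adding the sensor where the current interpolation errs most. Inverse-distance weighting and the greedy loop below are illustrative stand-ins for the paper's interpolation method and prioritised medium access.

import numpy as np

# Sketch: interpolate a dense 1-D sensor field from a handful of readings,
# greedily selecting the sensor with the largest current interpolation error
# (the role played by prioritised medium access in the actual system).

def idw(known_pos, known_val, query_pos, eps=1e-9):
    """Inverse-distance-weighted estimate at each query position."""
    d = np.abs(query_pos[:, None] - known_pos[None, :]) + eps
    w = 1.0 / d
    return (w * known_val[None, :]).sum(axis=1) / w.sum(axis=1)

pos = np.linspace(0.0, 1.0, 200)       # sensor locations (dense network)
field = np.sin(2 * np.pi * pos)        # true readings
chosen = [0, 199]                      # start from the two endpoints
for _ in range(6):                     # six further 'messages'
    est = idw(pos[chosen], field[chosen], pos)
    chosen.append(int(np.argmax(np.abs(est - field))))
final = idw(pos[chosen], field[chosen], pos)
print("max error with", len(chosen), "readings:",
      float(np.max(np.abs(final - field))))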
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which communication between sensors, actuators and controllers occurs through a shared band-limited digital communication network. However, the use of a shared communication network, in contrast to several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network to be used in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good trade-off between accuracy in the measurement of the input signals and the delay to actuation, both important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms which we prove performs better because it takes into account changes of the input signal over time within the process of obtaining an approximate interpolation.
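One plausible reading of the time-aware variation, sketched below purely under that assumption: rank sensors by how much their new reading deviates from the previous interpolation, so that channel access goes to the nodes whose values changed most since the last snapshot.

import numpy as np

# Hypothetical priority rule for a time-aware variant: select the sensors
# whose readings moved most relative to the previous interpolation. This is
# an illustrative assumption, not the paper's exact algorithm.

def select_by_change(readings: np.ndarray, prev_estimate: np.ndarray,
                     budget: int) -> np.ndarray:
    """Indices of the `budget` sensors whose readings changed the most."""
    deviation = np.abs(readings - prev_estimate)
    return np.argsort(deviation)[::-1][:budget]

readings = np.array([1.0, 1.1, 3.0, 0.9, 2.5])
prev_est = np.array([1.0, 1.0, 1.0, 1.0, 2.4])
print(select_by_change(readings, prev_est, budget=2))  # sensor 2 ranks first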
Abstract:
The availability of small inexpensive sensor elements enables the employment of large wired or wireless sensor networks for feeding control systems. Unfortunately, the need to transmit a large number of sensor measurements over a network negatively affects the timing parameters of the control loop. This paper presents a solution to this problem by representing sensor measurements with an approximate representation: an interpolation of sensor measurements as a function of space coordinates. A priority-based medium access control (MAC) protocol is used to select the sensor messages with high information content. Thus, the information from a large number of sensor measurements is conveyed within a few messages. This approach greatly reduces the time for obtaining a snapshot of the environment state and therefore supports the real-time requirements of feedback control loops.
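A minimal sketch of how "information content" can drive medium access: quantise each node's deviation from the current interpolation into a fixed-width priority field, so that contention resolution itself selects the most useful message. The field width and error range below are assumptions.

# Encoding a reading's usefulness as a fixed-width MAC priority, so the
# contention winner is also the most informative message. Field width and
# full-scale error are illustrative assumptions.

PRIORITY_BITS = 8
MAX_ERR = 10.0  # assumed full-scale interpolation error

def error_to_priority(err: float) -> int:
    """Quantise |error| to an 8-bit priority; larger error -> higher priority."""
    clipped = min(abs(err), MAX_ERR)
    return round((2 ** PRIORITY_BITS - 1) * clipped / MAX_ERR)

# The node whose reading deviates most from the current interpolation wins:
errors = {"node_a": 0.3, "node_b": 4.2, "node_c": 1.7}
winner = max(errors, key=lambda n: error_to_priority(errors[n]))
print(winner, error_to_priority(errors[winner]))  # node_b wins the contention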
Abstract:
Cooperating objects (COs) is a recently coined term used to signify the convergence of classical embedded computer systems, wireless sensor networks and robotics and control. We present essential elements of a reference architecture for scalable data processing for the CO paradigm.
Abstract:
In this paper, we focus on large-scale and dense Cyber-Physical Systems, and discuss methods that tightly integrate communication and computing with the underlying physical environment. We present the Physical Dynamic Priority Dominance ((PD)2) protocol, which exemplifies a key mechanism for devising low time-complexity communication protocols for large-scale networked sensor systems. We show that, using this mechanism, one can compute aggregate quantities such as the maximum or minimum of sensor readings in a time-complexity that is equivalent to essentially one message exchange. We also illustrate the use of this mechanism in the more complex task of computing an interpolation of smooth as well as non-smooth sensor data in very low time-complexity.
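The dominance mechanism behind such protocols can be simulated directly: nodes contend bit by bit, MSB first, on a medium where a '1' is dominant, and recessive senders withdraw whenever they observe a dominant bit they did not send. After one arbitration pass the surviving value is the network-wide maximum. The 8-bit width below is an assumption.

# Simulation of dominance-based arbitration (the mechanism exemplified by
# (PD)2-style protocols): after one bitwise arbitration pass, the value left
# on the bus is the maximum of all contenders, i.e. MAX is computed in
# roughly the time of a single message exchange.

def dominance_max(values: list, bits: int = 8) -> int:
    active = list(values)
    result = 0
    for i in reversed(range(bits)):               # MSB first
        wire = max((v >> i) & 1 for v in active)  # wired-OR bus level
        result = (result << 1) | wire
        if wire:                                  # recessive senders withdraw
            active = [v for v in active if (v >> i) & 1]
    return result

readings = [17, 203, 45, 129, 76]
assert dominance_max(readings) == max(readings)
print(dominance_max(readings))  # 203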
Abstract:
A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is based on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits resizing it to efficiently support transforms of different orders (e.g., order-4, order-8, order-16 and order-32). Consequently, it is not only highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, these results demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above, capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 x 4,320 at 30 fps) in real time.
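A software analogue of such a configurable structure is a single separable 2-D routine parameterised by the kernel matrix, so the same code path serves different standards and block sizes. The sketch below plugs in the H.264/AVC 4×4 core and 4×4 Hadamard kernels; larger HEVC or AVS kernels would slot into the same dispatch in the same way.

import numpy as np

# Configurable separable 2-D transform: Y = K @ X @ K.T, with the kernel K
# selected at run time. Only two small kernels are shown; this models the
# configurability, not the paper's hardware datapath.

KERNELS = {
    "avc_dct4":  np.array([[1, 1, 1, 1], [2, 1, -1, -2],
                           [1, -1, -1, 1], [1, -2, 2, -1]]),
    "hadamard4": np.array([[1, 1, 1, 1], [1, 1, -1, -1],
                           [1, -1, -1, 1], [1, -1, 1, -1]]),
}

def transform_2d(block: np.ndarray, kernel: str) -> np.ndarray:
    """Row-column (separable) 2-D transform with a selectable kernel."""
    K = KERNELS[kernel]
    assert block.shape == K.shape, "block size must match the selected kernel"
    return K @ block @ K.T

x = np.arange(16).reshape(4, 4)
print(transform_2d(x, "avc_dct4"))
print(transform_2d(x, "hadamard4"))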
Abstract:
Electronics Letters, Vol. 38, No. 19