731 results for Live Video
Abstract:
OBJECTIVE: To identify risk factors for low birth weight (LBW) among live births by vaginal delivery and to determine whether the disappearance of the association between LBW and socioeconomic factors was due to confounding by cesarean section. METHODS: Data were obtained from two population-based cohorts of singleton live births in Ribeirão Preto, Southeastern Brazil. The first comprised 4,698 newborns from June 1978 to May 1979 and the second included 1,399 infants born from May to August 1994. The risks for LBW were tested in a logistic model, including the interaction of the year of survey with all independent variables under analysis. RESULTS: The incidence of LBW among vaginal deliveries increased from 7.8% in 1978-79 to 10% in 1994. The risk was higher for: female or preterm infants; newborns of non-cohabiting mothers; newborns whose mothers had fewer prenatal visits or fewer years of education; first-born infants; and those whose mothers smoked. The interaction of the year of survey with gestational age indicated that the risk of LBW among preterm infants fell from 17.75 to 8.71 over 15 years. The mean birth weight decreased more markedly among newborns from qualified families, who also had the highest increase in preterm birth and non-cohabitation. CONCLUSIONS: LBW among vaginal deliveries increased mainly due to a rise in the proportion of preterm births and non-cohabiting mothers. The association between cesarean section and LBW tended to cover up socioeconomic differences in the likelihood of LBW. When vaginal deliveries were analyzed independently, these socioeconomic differences reappeared.
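A logistic model with a survey-year interaction term, as described above, can be sketched in code. This is a generic illustration fitted on synthetic data; all sample sizes, prevalences and coefficients below are invented, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
year94 = rng.integers(0, 2, n)        # cohort indicator: 0 = 1978-79, 1 = 1994
preterm = rng.integers(0, 2, n)       # preterm-birth indicator (toy prevalence)

# Hypothetical true effects: a negative interaction mimics the falling
# LBW risk among preterm infants between the two surveys
logit = -2.5 + 2.9 * preterm + 0.3 * year94 - 0.7 * preterm * year94
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), preterm, year94, preterm * year94])

beta = np.zeros(4)
for _ in range(25):                   # Newton-Raphson for the logistic MLE
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

# exp(beta) are odds ratios; the interaction OR should come out below 1 here
print(np.exp(beta))
```

A significant interaction odds ratio below 1 is what indicates that the preterm-LBW association weakened between the two surveys.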
Abstract:
This article aims to discuss the role humour plays in politics, particularly in a media environment overflowing with user-generated video. We start with a genealogy of political satire, from classical to Internet times, followed by a general description of “the Downfall meme,” a series of videos on YouTube featuring footage from the film Der Untergang and nonsensical subtitles. Amid video games, celebrities, and the Internet itself, politicians and politics are the target of such twenty-first-century caricatures. By analysing these videos we hope to elucidate how the manipulation of images is embedded in everyday practices and may be of political consequence, namely by deflating politicians' constructed media image. The realm of image, at the centre of the Internet's technological culture, is connected with decisive aspects of today's social structure of knowledge and play. It is timely to understand which part of “playing” is in fact an expressive practice with political significance.
Abstract:
A new high-throughput and scalable architecture for unified transform coding in H.264/AVC is proposed in this paper. Such a flexible structure is capable of computing all the 4x4 and 2x2 transforms for Ultra High Definition Video (UHDV) applications (4320x7680 @ 30 fps) in real time and at low hardware cost. These significantly high performance levels were proven with the implementation of several different configurations of the proposed structure using both FPGA and ASIC 90 nm technologies. In addition, this experimental evaluation also demonstrated the high area efficiency of the proposed architecture, which in terms of Data Throughput per Unit of Area (DTUA) is at least 1.5 times more efficient than its most prominent related designs (1).
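The 4x4 transform such architectures compute is the H.264/AVC integer approximation of the DCT. As a minimal software reference (a sketch of the arithmetic, not of the hardware design itself), the 2-D forward transform is two integer matrix multiplications:

```python
import numpy as np

# H.264/AVC 4x4 forward core transform matrix (integer DCT approximation)
Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_4x4(block):
    """2-D forward core transform: Y = Cf . X . Cf^T, integer arithmetic only."""
    return Cf @ block @ Cf.T

X = np.arange(16).reshape(4, 4)       # toy residual block
Y = forward_4x4(X)

# The transform is invertible; the standard folds the required scaling into
# quantisation, but an exact algebraic round-trip recovers the block:
X_rec = np.linalg.inv(Cf) @ Y @ np.linalg.inv(Cf.T)
print(np.allclose(X_rec, X))          # → True
```

Because every entry of Cf is a small integer, hardware implementations need only additions and shifts, which is what makes the high throughput per unit of area reported above achievable.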
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to different correlation noise channels. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion-compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
Abstract:
A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency levels provided by the proposed structure, which presents a higher throughput per unit of area than other similar recently published designs targeting the H.264/AVC standard. The results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x relative to pure software implementations of the transform algorithms, therefore allowing the real-time computation of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (4,320 x 7,680 @ 30 fps).
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, making it possible to reach more solid conclusions and better identify the next relevant research challenges.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that the rate-distortion (RD) performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
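The paper's own combination algorithm is iterative and driven by decoded information; as a much simpler illustration of the underlying idea, combining SI hypotheses according to virtual-channel statistics, here is a hypothetical inverse-variance fusion of two noisy hypotheses (all signals and noise levels below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in original frame

# Two SI hypotheses = original + virtual-channel noise of different strengths
sigmas = [4.0, 12.0]
hyps = [frame + rng.normal(0.0, s, frame.shape) for s in sigmas]

# Inverse-variance (MMSE-style) weighting, assuming the channel statistics
# of each hypothesis are known at the decoder
w = np.array([1.0 / s**2 for s in sigmas])
fused = sum(wi * h for wi, h in zip(w, hyps)) / w.sum()

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# The fused estimate should beat every individual hypothesis
print(mse(fused, frame), [mse(h, frame) for h in hyps])
```

Even this static weighting lowers the estimation error below the best single hypothesis; the adaptive, per-region combination described above pushes the gain further.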
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance when compared to the popular turbo codes.
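In syndrome-based Slepian-Wolf coding, the encoder transmits s = Hx (mod 2), so the compression ratio is fixed by the number of parity-check rows; merging two check nodes XORs their rows and drops one syndrome bit. A toy sketch with a random (not properly designed) parity-check matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 8
H = rng.integers(0, 2, (m, n))           # toy parity-check matrix (m x n)
x = rng.integers(0, 2, n)                # source bitplane to compress

s = H @ x % 2                            # syndrome sent to the decoder
print(len(s) / n)                        # compression ratio m/n → 0.5

# Merging two check nodes = mod-2 sum (XOR) of their parity-check rows:
# one fewer syndrome bit, hence a higher compression of the source
H_merged = np.vstack([H[:-2], (H[-2] + H[-1]) % 2])
s_merged = H_merged @ x % 2
print(len(s_merged) / n)                 # → 0.4375

# Consistency: the merged syndrome bit is the XOR of the two original bits
print(s_merged[-1] == (s[-2] + s[-1]) % 2)   # → True
```

This is why merging can be done gracefully: each merge is a deterministic row operation, so the decoder can derive the merged code from the mother code without any extra signalling.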
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Audiovisual e Multimédia.
Abstract:
It is common knowledge that many teachers teach students to listen, sing and compose, yet use written tests to assess their learning. This reveals that, on the one hand, some believe written tests actually assess musical practices and, on the other hand, some consider assessing musical practices very difficult and inconsistent because Music is transitory, ephemeral and immaterial in character. Besides being interesting, the problem is wide-ranging, as it exists among specialist Music teachers as well as among generalist teachers in pre-school (Educação de Infância) and primary (1º Ciclo) education. The experience carried out over the last three years in the Mestrado em Ensino de Educação Musical no Ensino Básico on how to assess students' learning in Music Education keeps written tests for assessing theoretical knowledge, but introduces an instrument for assessing musical practices: grids of performance descriptors. These grids, while following common guiding principles, are always tailor-made for each specific combination of musical work, activity and students, and are applied not only through direct observation but also to audio/video recordings. These instruments are built by the teachers rather than together with the students, but they are presented and explained to the students from the start of each teaching unit, used regularly for formative self-assessment and, in the final presentations, for summative assessment. In this way students know from the outset where they are expected to arrive, and know at every moment of the process where they stand and which musical problems/difficulties must be overcome. In each activity we compared the students' self-assessment with the teachers' assessment and found, in every situation, a high positive correlation (r > 0.9). We still need to consolidate the data already obtained, so we hope to involve more teachers.
The possibility of disseminating the solution presented here, namely through in-service teacher training, will allow a clear increase in the consistency and reliability of the assessment of musical practices in Music Education.
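The reported agreement between self-assessment and teacher assessment is a Pearson correlation; with hypothetical paired marks (the numbers below are invented for illustration, not the study's data), the coefficient is computed as:

```python
import numpy as np

# Hypothetical paired marks on a 0-20 scale (not the study's data)
self_marks = np.array([14, 16, 12, 18, 15, 11, 17, 13, 19, 16], dtype=float)
teacher_marks = np.array([13, 16, 12, 17, 15, 12, 17, 14, 18, 15], dtype=float)

# Pearson product-moment correlation between the two assessments
r = np.corrcoef(self_marks, teacher_marks)[0, 1]
print(round(r, 2))   # a high positive correlation (r > 0.9) for this toy data
```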
Abstract:
Mimicking the online practices of citizens has been declared an imperative to improve communication and extend participation. This paper seeks to contribute to the understanding of how European discourses praising online video as a communication tool have been translated into actual practices by politicians, governments and organisations. By contrasting official documents with YouTube activity, it is argued that the new opportunities for European political communication are far from being fully embraced, much akin to the early years of websites. The main choice has been to use YouTube channels fundamentally for distribution and archiving, thus neglecting the platform's social media features. The disabling of comments by many heads of state and prime ministers - and, in 2010, the European Commission - indicates such an attitude. The few attempts made to foster citizen engagement, in particular during elections, have had limited success, given low participation numbers and the lack of argument exchange.
Abstract:
The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm make it possible to design novel scalable coding solutions with improved error robustness and low encoding complexity while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimate of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both encoder and decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
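BD-Rate figures like the 3.43% savings quoted above come from the Bjontegaard metric: fit log-rate against PSNR for each codec, integrate over the common quality interval, and convert the mean log-rate gap into a percentage. A compact sketch with invented RD points:

```python
import numpy as np

def bd_rate(rate_a, psnr_a, rate_b, psnr_b):
    """Bjontegaard delta rate: average bitrate change (%) of codec B vs. A
    at equal quality, using cubic fits of log-rate as a function of PSNR."""
    la, lb = np.log(rate_a), np.log(rate_b)
    pa, pb = np.polyfit(psnr_a, la, 3), np.polyfit(psnr_b, lb, 3)
    lo = max(min(psnr_a), min(psnr_b))       # overlapping PSNR interval
    hi = min(max(psnr_a), max(psnr_b))
    ia, ib = np.polyint(pa), np.polyint(pb)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_b = (np.polyval(ib, hi) - np.polyval(ib, lo)) / (hi - lo)
    return (np.exp(avg_b - avg_a) - 1.0) * 100.0

psnr = [30.0, 34.0, 38.0, 42.0]              # hypothetical anchor RD curve
rate = [500.0, 1000.0, 2100.0, 4500.0]       # kbit/s
rate_new = [0.9 * r for r in rate]           # 10% cheaper at every quality
print(round(bd_rate(rate, psnr, rate_new, psnr), 1))   # → -10.0
```

A negative BD-Rate means the second codec needs less bitrate for the same quality, which is why the savings above are reported as positive percentages.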
Abstract:
As high dynamic range video gains popularity, video coding solutions able to efficiently provide both low and high dynamic range video, notably within a single bitstream, are increasingly important. While simulcasting can provide both dynamic range versions at the cost of some compression efficiency penalty, bit-depth scalable video coding can provide a better trade-off between compression efficiency, adaptation flexibility and computational complexity. Considering the widespread use of H.264/AVC video, this paper proposes an H.264/AVC backward-compatible bit-depth scalable video coding solution offering a low dynamic range base layer and two high dynamic range enhancement layers with different qualities, at low complexity. Experimental results show that the proposed solution has an acceptable rate-distortion performance penalty relative to the HDR H.264/AVC single-layer coding solution.
Abstract:
In video communication systems, the video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing on principles and tools from an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) an improved compressed-domain perceptual classification mechanism; (ii) an improved transcoding tool for the DSC-based protection mechanism; and (iii) the integration of a perceptual classification mechanism in an H.264/AVC compliant codec with a DSC-based error protection mechanism. The performance results obtained show that the proposed PDEP video codec provides a better-performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Electronics Letters, Vol. 38, No. 19