Abstract:
A new high-throughput and scalable architecture for unified transform coding in H.264/AVC is proposed in this paper. This flexible structure is capable of computing all the 4x4 and 2x2 transforms for Ultra High Definition Video (UHDV) applications (4320x7680 @ 30 fps) in real time and at low hardware cost. These significantly high performance levels were proven by implementing several different configurations of the proposed structure using both FPGA and ASIC 90 nm technologies. In addition, this experimental evaluation also demonstrated the high area efficiency of the proposed architecture, which in terms of Data Throughput per Unit of Area (DTUA) is at least 1.5 times more efficient than its most prominent related designs [1].
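The 4x4 transform referred to above is the integer core transform defined by the H.264/AVC standard. A minimal software sketch of the forward pass (scaling and quantization omitted, and far from the parallel hardware datapath the paper proposes) is:

```python
import numpy as np

# Forward 4x4 integer core transform of H.264/AVC: Y = C @ X @ C.T.
# The matrix C below is the one defined by the standard.
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int32)

def forward_transform_4x4(block):
    """Apply the H.264/AVC 4x4 forward core transform to one residual block."""
    X = np.asarray(block, dtype=np.int32)
    return C @ X @ C.T

# A constant residual block concentrates all energy in the DC coefficient.
Y = forward_transform_4x4(np.full((4, 4), 5))
```

The same separable row/column structure is what makes the hardware architectures discussed in these abstracts easy to pipeline.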
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. Thus, it is proposed to exploit the multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion-compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
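The syndrome-based Slepian-Wolf setup underlying this kind of codec can be sketched in a few lines. The tiny parity-check matrix below is purely illustrative; practical DVC codecs use large, sparse LDPC matrices and belief-propagation decoding rather than a direct check:

```python
import numpy as np

# Illustrative tiny parity-check matrix H (real LDPC matrices are large
# and sparse). Syndrome coding transmits s = H @ x mod 2 instead of x.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]], dtype=np.uint8)

def syndrome(x):
    """Syndrome of bitplane x over GF(2)."""
    return (H @ np.asarray(x, dtype=np.uint8)) % 2

source = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
s = syndrome(source)                    # the only data sent to the decoder

# A side-information hypothesis with one bit flipped fails the parity
# checks, which is what drives the iterative decoder toward the source.
side_info = np.array([1, 0, 1, 0, 0, 1], dtype=np.uint8)
mismatch = not np.array_equal(syndrome(side_info), s)
```

With multiple SI hypotheses, each hypothesis gets its own decoder, and the decoders exchange soft information as the abstract describes.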
Abstract:
This paper presents a Game Theory based methodology to allocate transmission costs, considering cooperation and competition between producers. As an original contribution, it finds the degree of participation in the additional costs according to the demand behavior. A comparative study was carried out between the results obtained using the Nucleolus balance and the Shapley Value, and other techniques such as the Averages Allocation method and the Generalized Generation Distribution Factors (GGDF) method. A six-node network was used as an example for the simulations. The results demonstrate the ability to find adequate solutions in an open-access network environment.
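The Shapley Value used in the comparison averages each producer's marginal cost over all orders in which the producers could join the coalition. A brute-force sketch (exponential in the number of players, fine for small games; the three-producer cost function below is a made-up example, not the paper's six-node network) is:

```python
from itertools import permutations

def shapley_values(players, cost):
    """Shapley value of a cost game: each player's marginal cost,
    averaged over every possible joining order."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

# Hypothetical symmetric 3-producer transmission-cost game.
costs = {frozenset(): 0,
         frozenset('A'): 6, frozenset('B'): 6, frozenset('C'): 6,
         frozenset('AB'): 10, frozenset('AC'): 10, frozenset('BC'): 10,
         frozenset('ABC'): 12}
phi = shapley_values('ABC', lambda S: costs[S])
```

By efficiency, the allocations always sum to the grand-coalition cost, which is what makes the value attractive for cost allocation.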
Abstract:
The salient feature of liquid crystal elastomers and networks is the strong coupling between orientational order and mechanical strain. Orientational order can be changed by a wide variety of stimuli, including the presence of moisture. Changes in the orientation of constituents give rise to stresses and strains, which result in changes in sample shape. We have utilized this effect to build a soft cellulose-based motor driven by humidity. The motor consists of a circular loop of cellulose film, which passes over two wheels. When humid air is present near one of the wheels on one side of the film, with drier air elsewhere, rotation of the wheels results. As the wheels rotate, the humid film dries. The motor runs as long as the difference in humidity is maintained. Our cellulose liquid crystal motor thus extracts mechanical work from a difference in humidity.
Abstract:
A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency levels provided by the proposed structure, which presents a considerably higher throughput per unit of area than other similar recently published designs targeting the H.264/AVC standard. These results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x relative to pure software implementations of the transform algorithms, therefore allowing the real-time computation of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (4,320 x 7,680 @ 30 fps).
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be drawn and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that the rate-distortion (RD) performance provided by the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform the H.264/AVC Intra standard solution, as well as the H.264/AVC zero-motion solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
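A denoising-style fusion of several SI hypotheses can be sketched as an inverse-variance weighted average. This is a textbook simplification: the fixed weights below stand in for the virtual-channel statistics that the paper estimates adaptively during decoding.

```python
import numpy as np

def fuse_side_information(hypotheses, noise_vars):
    """Fuse SI hypotheses pixel-wise, weighting each one by the inverse
    of its estimated virtual-channel noise variance (a common denoising
    heuristic, not the authors' exact combination rule)."""
    h = np.stack([np.asarray(x, dtype=np.float64) for x in hypotheses])
    w = 1.0 / np.asarray(noise_vars, dtype=np.float64)
    w = w / w.sum()                       # normalize weights to 1
    return np.tensordot(w, h, axes=1)     # weighted sum over hypotheses

# Two hypothetical hypotheses of a 2x2 block; the first is more reliable.
si = fuse_side_information([[[10, 12], [14, 16]],
                            [[18, 12], [14,  8]]],
                           noise_vars=[1.0, 3.0])
```

Pixels where the hypotheses agree are unchanged; where they disagree, the fused value leans toward the lower-noise hypothesis.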
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance when compared to the popular turbo codes.
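Merging two parity-check nodes amounts to replacing the corresponding rows of the parity-check matrix H by their GF(2) sum, which shortens the syndrome and thus adapts the rate. A minimal sketch (with an illustrative toy matrix, not one of the paper's code designs):

```python
import numpy as np

def merge_check_nodes(H, i, j):
    """Merge parity-check nodes i and j: replace row i with their GF(2)
    sum and delete row j, reducing the syndrome length by one bit."""
    H = np.array(H, dtype=np.uint8)
    H[i] = (H[i] + H[j]) % 2
    return np.delete(H, j, axis=0)

H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=np.uint8)
H2 = merge_check_nodes(H, 0, 1)   # 3 check nodes -> 2 check nodes
```

Because the merged syndrome bit is just the XOR of the two original bits, an encoder can transmit the long syndrome incrementally while the decoder works with progressively merged versions.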
Abstract:
Master's dissertation, Economics and Business Sciences, 8 January 2016, Universidade dos Açores.
Abstract:
Mimicking the online practices of citizens has been declared an imperative to improve communication and extend participation. This paper seeks to contribute to the understanding of how European discourses praising online video as a communication tool have been translated into actual practices by politicians, governments and organisations. By contrasting official documents with YouTube activity, it is argued that the new opportunities for European political communication are far from being fully embraced, much akin to the early years of websites. The main choice has been to use YouTube channels fundamentally for distribution and archiving, thus neglecting their social media features. The disabling of comments by many heads of state and prime ministers - and, in 2010, the European Commission - indicates such an attitude. The few attempts made to foster citizen engagement, in particular during elections, have had limited success, given the low participation numbers and lack of argument exchange.
Abstract:
The TIMEMESH game, developed in the scope of the European project SELEAG, is an educational game for learning history, culture and social relations. It is supported by an extensible, online, multi-language, multi-player, collaborative and social platform for sharing and acquiring knowledge of the history of European regions. The game has already been used, with remarkable success, in different European countries such as Portugal, Spain, England, Slovenia, Estonia and Belgium.
Abstract:
Serious games are starting to attain a higher role as tools for learning in various contexts, in particular in areas such as education and training. Due to their characteristics, such as rules, behavior simulation and feedback to the player's actions, serious games provide a favorable learning environment where errors can occur without real-life penalty and students get instant feedback on challenges. These challenges are in accordance with the intended objectives and self-adapt and repeat according to the student's difficulty level. Through motivating and engaging environments, which serve as a base for problem solving and the simulation of different situations and contexts, serious games have a great potential to help players develop professional skills. But how do we certify the acquired knowledge and skills? With this work we propose a methodology to establish a relationship between the game mechanics of serious games and an array of competences for certification, evaluating the applicability of various aspects of game design and development, such as the user interfaces and the gameplay, and obtaining learning outcomes within the game itself. Through the definition of game mechanics combined with the necessary pedagogical elements, the game will ensure the certification. This paper presents a matrix of generic skills, based on the European Qualifications Framework, and the definition of the game mechanics necessary for certification in a tour guide training context. The certification matrix has as reference axes skills, knowledge and competences, which describe what the students should learn, understand and be able to do after they complete the learning process. Guide-interpreters welcome and accompany tourists on trips and visits to places of tourist interest and cultural heritage, such as museums, palaces and national monuments, where they provide various kinds of information.
Tour guide certification requirements include specific skills and knowledge of foreign languages and of the History, Ethnology, Politics, Religion, Geography and Art of the territory where the guide operates. These skills include communication, interpersonal relationships, motivation, organization and management. The certification process aims to validate the ability to plan and conduct guided tours of the territory, to demonstrate knowledge appropriate to the context and, finally, to act as a good group leader. After defining which competences are to be certified, the next step is to delineate the expected learning outcomes, as well as to identify the game mechanics associated with them. The game mechanics, as the methods invoked by agents to interact with the game world, in combination with game elements/objects allow multiple paths through which to explore the game environment and its educational process. Mechanics such as achievements, appointments, progression, reward schedules or status describe how the game can be designed to affect players in unprecedented ways. In order for the game to be able to certify tour guides, the design of the training game will incorporate a set of theoretical and practical tasks for acquiring skills and knowledge on various transversal themes. To this end, patterns of skills and abilities in acquiring different knowledge will be identified.
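The competence-to-mechanics mapping described above can be represented as a simple matrix-like structure. The entries below are illustrative placeholders, not the authors' actual certification matrix:

```python
# Hypothetical slice of a certification matrix: each competence row
# lists an expected learning outcome and the game mechanics that
# evidence it (names are illustrative only).
certification_matrix = {
    "communication": {
        "learning_outcome": "deliver clear commentary at a heritage site",
        "mechanics": ["appointments", "status"],
    },
    "organization": {
        "learning_outcome": "plan and conduct a guided tour",
        "mechanics": ["progression", "achievements"],
    },
}

def mechanics_for(competence):
    """Return the game mechanics that certify a given competence."""
    return certification_matrix[competence]["mechanics"]
```

Keeping the mapping explicit like this lets the game log evidence per mechanic and roll it up per competence when issuing the certification.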
Abstract:
Serious games are games where the entertainment aspect is not the most relevant motivation or objective. TimeMesh is an online, multi-language, multiplayer, collaborative and social game platform for sharing and acquiring knowledge of the history of European regions. As such, it is a serious game with educational characteristics. This article evaluates the use of TimeMesh with students aged 13 and 14. It shows that this game is already a significant tool for learning about European citizenship.
Abstract:
Final Master's project for obtaining the degree of Master in Communication Networks and Multimedia Engineering.
Abstract:
The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm allow the design of novel scalable coding solutions with improved error robustness and low encoding complexity, while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimate of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both the encoder and the decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
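The correlation noise between the original and SI residues is commonly modeled in the DVC literature as a Laplacian distribution. A sketch of a simple moment-based estimate of its scale parameter follows; the paper's actual model and estimator may differ, and the sample residual below is made up:

```python
import numpy as np

def laplacian_alpha(residual):
    """Estimate the Laplacian scale parameter alpha = sqrt(2 / sigma^2)
    from the correlation-noise variance sigma^2, the standard moment
    match used for DVC correlation models."""
    noise = np.asarray(residual, dtype=np.float64)
    var = noise.var()
    return np.sqrt(2.0 / var) if var > 0 else np.inf

def laplacian_pdf(x, alpha):
    """Laplacian density f(x) = (alpha / 2) * exp(-alpha * |x|)."""
    return 0.5 * alpha * np.exp(-alpha * np.abs(x))

# Hypothetical difference between original and SI residues:
alpha = laplacian_alpha([2.0, -1.0, 0.0, 1.0, -2.0])
```

Without a feedback channel, both encoder and decoder must compute such a model from data they both possess, which is the constraint the paper's correlation modeling solution addresses.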