Abstract:
Abstract of paper delivered at the 17th International Reversal Theory Conference, Day 3, session 4, 01.07.15
Abstract:
This paper reports on the first known empirical use of the Reversal Theory State Measure (RTSM) since its publication by Desselles et al. (2014). The RTSM was employed to track responses to three purposively selected video commercials in a between-subjects design. Results of the study provide empirical support both for the central conceptual premise of reversal theory, the experience of metamotivational reversals, and for the ability of the RTSM to capture them. The RTSM was also found to be psychometrically sound after adjustments were made to two of its three component subscales. A detailed account of, and rationale for, the analytical process of assessing the psychometric robustness of the RTSM is provided, and a number of techniques and interpretations relating to component structure and reliability are discussed. The two available versions of the RTSM – the bundled and the branched – are also compared and critiqued. Researchers are encouraged to assist the development of the RTSM through further use, taking into account the analysis and recommendations presented.
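To illustrate the kind of subscale reliability analysis the abstract describes, here is a minimal sketch that computes Cronbach's alpha over a response matrix; the Likert data and the four-item subscale are hypothetical, not the RTSM's actual items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a four-item subscale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(40, 4))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```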
Abstract:
This paper presents a new rate-control algorithm for live video streaming over wireless IP networks, based on selective frame discarding. In the proposed mechanism, excess 'P' frames are dropped from the output queue at the sender using a congestion estimate based on packet loss statistics obtained from RTCP feedback and from the Data Link (DL) layer. The performance of the algorithm is evaluated through computer simulation. The paper also presents a characterisation of packet losses owing to transmission errors and congestion, which can help in choosing appropriate strategies to maximise the video quality experienced by the end user.
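To make the selective-discarding idea concrete, here is a minimal sketch of a sender-side queue that sheds 'P' frames when a loss-based congestion estimate crosses a threshold; the frame model, threshold value and the way the two loss statistics are combined are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import deque

DROP_THRESHOLD = 0.05  # hypothetical loss fraction above which P frames are shed

def congestion_estimate(rtcp_loss: float, dl_loss: float) -> float:
    """Combine loss statistics from RTCP feedback and the Data Link layer."""
    return max(rtcp_loss, dl_loss)

def enqueue(queue: deque, frame: dict, rtcp_loss: float, dl_loss: float) -> None:
    """Append a frame to the output queue, discarding excess P frames under congestion."""
    if frame["type"] == "P" and congestion_estimate(rtcp_loss, dl_loss) > DROP_THRESHOLD:
        return  # drop: P frames can be shed with less impact than I frames
    queue.append(frame)

q = deque()
enqueue(q, {"type": "I", "seq": 1}, rtcp_loss=0.08, dl_loss=0.02)
enqueue(q, {"type": "P", "seq": 2}, rtcp_loss=0.08, dl_loss=0.02)  # discarded
print([f["seq"] for f in q])  # -> [1]
```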
Abstract:
The Internet as a video distribution medium has seen tremendous growth in recent years. Currently, the transmission of major live events and TV channels over the Internet can easily reach hundreds of thousands or even millions of users trying to receive the same content on very distinct receiver terminals, posing both scalability and heterogeneity challenges to content and network providers. In private and well-managed Internet Protocol (IP) networks these types of distribution are supported by specially designed architectures, complemented with IP Multicast protocols and Quality of Service (QoS) solutions. However, the Best-Effort and Unicast nature of the Internet requires the introduction of a new set of protocols and related architectures to support the distribution of this content. In the field of file and non-real-time content distribution, this has led to the creation and development of several Peer-to-Peer protocols that have experienced great success in recent years. This chapter presents current research and developments in Peer-to-Peer video streaming over the Internet, with a special focus on peer protocols, associated architectures and video coding techniques. The authors also review and describe current Peer-to-Peer streaming solutions.
Abstract:
The Joint Video Team, composed of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), has standardized a scalable extension of the H.264/AVC video coding standard called Scalable Video Coding (SVC). H.264/SVC provides scalable video streams composed of a base layer and one or more enhancement layers. Enhancement layers may improve the temporal resolution, the spatial resolution or the signal-to-noise ratio of the content represented by the lower layers. One application of this standard is video transmission in both wired and wireless communication systems, and it is therefore important to analyze how packet losses contribute to the degradation of quality, and which mechanisms could be used to improve that quality. This paper provides an analysis and evaluation of H.264/SVC in error-prone environments, quantifying the degradation caused by packet losses in the decoded video. It also proposes and analyzes the consequences of QoS-based discarding of packets through different marking solutions.
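As a toy illustration of the QoS-based discarding the paper evaluates, the sketch below drops enhancement-layer packets before base-layer packets on a constrained link; the Packet structure and the max_layer marking are assumptions for illustration, not the paper's marking solutions.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    layer: int  # 0 = base layer, 1+ = enhancement layers

def mark_and_filter(packets: list[Packet], max_layer: int) -> list[Packet]:
    """Keep only the layers a congested link can carry; the base layer always survives."""
    return [p for p in packets if p.layer <= max_layer]

stream = [Packet(1, 0), Packet(2, 1), Packet(3, 2), Packet(4, 0)]
print([p.seq for p in mark_and_filter(stream, max_layer=0)])  # -> [1, 4]
```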
Abstract:
The number of software applications available on the Internet for distributing video streams in real time over P2P networks has grown quickly in the last two years. Typically, this kind of distribution is carried out by television channel broadcasters trying to make their content globally available, using viewers' resources to support large-scale distribution of video without incurring incremental costs. However, the lack of adaptation in video quality, combined with the lack of a standard protocol for this kind of multimedia distribution, has driven content providers to largely ignore it as a solution for video delivery over the Internet. While the scalable extension of H.264 encoding (H.264/SVC) can be used to support terminal and network heterogeneity, it is not clear how it can be integrated into a P2P overlay to form a large-scale, real-time distribution. In this paper, we start by defining a solution that combines the most popular P2P file-sharing protocol, BitTorrent, with H.264/SVC encoding for real-time video content delivery. Using this solution, we then evaluate the effect of several parameters on the quality received by peers.
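A rough sketch of how layer-aware piece selection might look in a BitTorrent-style swarm carrying SVC layers: pieces near the playback deadline are requested first, and within that window the base layer outranks enhancement layers. The piece metadata, window size and ranking are hypothetical, not the policy the paper defines.

```python
def pick_piece(missing: list[dict], playhead: int, window: int = 8):
    """Return the next piece to request, or None if nothing is urgent."""
    urgent = [p for p in missing if playhead <= p["index"] < playhead + window]
    if not urgent:
        return None
    # Playback order first, then layer: base layer (0) before enhancements.
    return min(urgent, key=lambda p: (p["index"], p["layer"]))

missing = [{"index": 5, "layer": 1}, {"index": 5, "layer": 0}, {"index": 9, "layer": 0}]
print(pick_piece(missing, playhead=4))  # -> {'index': 5, 'layer': 0}
```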
Abstract:
We compare the effect of different text segmentation strategies on speech-based passage retrieval of video. Passage retrieval has mainly been studied to improve document retrieval and to enable question answering. In these domains the best results were obtained using passages defined by the paragraph structure of the source documents or by using arbitrary overlapping passages. For the retrieval of relevant passages in a video using speech transcripts, no author-defined segmentation is available. We compare retrieval results from four different types of segments based on the speech channel of the video: fixed-length segments, a sliding window, semantically coherent segments and prosodic segments. We evaluated the methods on the corpus of the MediaEval 2011 Rich Speech Retrieval task. Our main conclusion is that retrieval results depend strongly on the right choice of segment length. However, results using segmentation into semantically coherent parts depend much less on the segment length. In particular, the quality of fixed-length and sliding-window segmentation drops quickly as the segment length increases, while the quality of the semantically coherent segments is much more stable. Thus, if coherent segments are defined, longer segments can be used, and consequently fewer segments have to be considered at retrieval time.
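The sketch below implements two of the four segmentation types compared in the paper, fixed-length segments and an overlapping sliding window, over a word-level transcript; the segment length and step are illustrative values.

```python
def fixed_segments(words: list[str], length: int) -> list[list[str]]:
    """Non-overlapping segments of `length` words."""
    return [words[i:i + length] for i in range(0, len(words), length)]

def sliding_segments(words: list[str], length: int, step: int) -> list[list[str]]:
    """Overlapping segments: a window of `length` words advanced by `step`."""
    return [words[i:i + length] for i in range(0, max(len(words) - length, 0) + 1, step)]

transcript = "the quick brown fox jumps over the lazy dog".split()
print(len(fixed_segments(transcript, 4)))       # -> 3
print(len(sliding_segments(transcript, 4, 2)))  # -> 3 (overlapping windows)
```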
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
The computer games industry is big business, and demand for graduates is high; indeed, there is a continuing shortage of skilled employees. As with most professions, the skill set required is both specific and diverse. There are currently over 30 Higher Education Institutions (HEIs) in the UK offering computer-games-related courses. We expect that, as demand from the industry is sustained, more HEIs will respond with the introduction of game-related degrees. This is a considerable undertaking, involving many issues, from the integration of new modules or complete courses within the existing curriculum to staff development. In this paper we share our experiences of introducing elements of game development into our curriculum. This has occurred over the past two years, starting with the inclusion of elements of game development in existing programming modules, followed by the validation of complete modules, and culminating in a complete degree course. Our experience is that adopting a progressive approach to development, spread over a number of years, was crucial in achieving a successful outcome.
Abstract:
Doctoral thesis, Geology (External Geodynamics), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
During the late twentieth century, the United Kingdom’s football infrastructure and spectatorship underwent transformation as successive stadia disasters heightened political and public scrutiny of the game and prompted industry change. Central to this process was the government’s formation of an independent charitable organization to oversee subsequent policy implementation and grant-aid provision to clubs for safety, crowd, and spectator requirements. This entity, which began in 1975 focusing on ground improvement, developed into the Football Trust. The Trust was funded directly by the football pools companies, which ran popular low-stakes football betting enterprises. Working in association with the Pools Promoters Association (PPA), and demonstrating their social responsibility towards the game’s constituents, the pools resourced a wide array of Trust activities. Yet irrespective of government mandate, the PPA and Trust were continually confronted by political and economic obstacles that threatened the effectiveness of their arrangements. In this paper the history of the Football Trust is investigated, along with its partnership with the PPA and its relationship with the government, within the context of broader political shifts, stadia catastrophes, official inquiries, and commercial threats. It is contended that while the Trust/PPA partnership had a respectable legacy, their history afforded little protection against adverse contemporary conditions.
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
This practice-based PhD comprises two interrelated elements: (i) ‘(un)childhood’, a 53-minute video-essay shown on two screens; and (ii) a 58,286-word written thesis. The project, which is contextualised within the tradition of artists working with their own children on time-based art projects, explores a new approach to time-based artistic work about childhood. While Stan Brakhage (1933-2003), Ernie Gehr (1943-), Erik Bullot (1963-) and Mary Kelly (1941-) all documented, photographed and filmed their children over a period of years to produce art projects (experimental films and a time-based installation), these projects were implicitly underpinned by a construction of childhood in which children, shown as they grow, represent the abstract primitive subject. The current project challenges the convention of representing children entirely from the adult’s point of view, as aesthetic objects without a voice, as well as the artist’s chronological approach to time. Instead, this project focuses on the relational joining of the child’s and adult’s points of view. The artist worked on a video project with her own son over a four-and-a-half-year period (between the ages of 5 and 10), through which she developed her ‘relational video-making’ methodology. The video-essay (un)childhood performs the relational voices of childhood as resulting from the verbal interactions of both children and adults. The non-chronological nature of (un)childhood offers an alternative to the linear-temporal approach to the representation of childhood. Through montage and a number of literal allusions to time in its dialogue, (un)childhood performs the relational times of childhood by combining children’s lives in the present with the temporal dimensions that have traditionally constructed childhood: past, future and timeless.
Abstract:
Data registration refers to a series of techniques for matching or bringing similar objects or datasets into alignment. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis, and structure from motion. Registration methods are as numerous as their manifold uses, from pixel-level and block- or feature-based methods to Fourier-domain methods. This book focuses on providing algorithms and image and video techniques for registration and quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
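As one concrete instance of the Fourier-domain methods mentioned above, the sketch below uses phase correlation to recover the integer translation between two images; the synthetic test images are illustrative.

```python
import numpy as np

def phase_correlation(a: np.ndarray, b: np.ndarray) -> tuple[int, int]:
    """Estimate the (row, col) shift that aligns image `b` to image `a`."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # normalized cross-power spectrum
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size into negative offsets.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

img = np.zeros((64, 64)); img[20:30, 20:30] = 1.0
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))  # -> (5, -3)
```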
Abstract:
Rapid developments in display technologies, digital printing, imaging sensors, image processing and image transmission are providing new possibilities for creating and conveying visual content. In an age in which images and video are ubiquitous, and in which mobile, satellite, and three-dimensional (3-D) imaging have become ordinary experiences, quantifying the performance of modern imaging systems requires appropriate approaches. At the end of the imaging chain, a human observer must decide whether images and video are of satisfactory visual quality. Hence the measurement and modeling of perceived image quality are of crucial importance, not only in visual arts and commercial applications but also in scientific and entertainment environments. Advances in our understanding of the human visual system offer new possibilities for creating visually superior imaging systems and promise more accurate modeling of image quality. As a result, there is a profusion of new research on imaging performance and perceived quality.
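As a concrete example of objective quality measurement in this vein, the sketch below computes the peak signal-to-noise ratio (PSNR) between a reference and a distorted image; PSNR is a standard metric chosen here for illustration, not one the text singles out.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)  # additive Gaussian noise
print(f"PSNR = {psnr(ref, noisy):.1f} dB")  # roughly 34 dB for sigma = 5
```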