813 results for video-otoscopy
Abstract:
The Internet as a video distribution medium has seen tremendous growth in recent years. Currently, the transmission of major live events and TV channels over the Internet can easily reach hundreds of thousands or millions of users trying to receive the same content on very distinct receiver terminals, posing scalability and heterogeneity challenges to both content and network providers. In private, well-managed Internet Protocol (IP) networks, these types of distribution are supported by specially designed architectures, complemented with IP Multicast protocols and Quality of Service (QoS) solutions. However, the Best-Effort and Unicast nature of the Internet requires the introduction of a new set of protocols and related architectures to support the distribution of this content. In the field of file and non-real-time content distribution, this has led to the creation and development of several Peer-to-Peer protocols that have experienced great success in recent years. This chapter presents current research and developments in Peer-to-Peer video streaming over the Internet, with special focus on peer protocols, associated architectures, and video coding techniques. The authors also review and describe current Peer-to-Peer streaming solutions. © 2013, IGI Global.
Abstract:
The Joint Video Team, composed of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), has standardized a scalable extension of the H.264/AVC video coding standard called Scalable Video Coding (SVC). H.264/SVC provides scalable video streams composed of a base layer and one or more enhancement layers. Enhancement layers may improve the temporal, spatial, or signal-to-noise ratio resolution of the content represented by the lower layers. One application of this standard is video transmission in both wired and wireless communication systems, so it is important to analyze how packet losses contribute to the degradation of quality, and which mechanisms could be used to improve that quality. This paper provides an analysis and evaluation of H.264/SVC in error-prone environments, quantifying the degradation caused by packet losses in the decoded video. It also proposes QoS-based discarding of packets through different marking solutions and analyzes the consequences.
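The idea of QoS-based discarding can be illustrated with a toy model. The sketch below is not the paper's actual marking scheme; the packet fields, the byte budget, and the layer-first drop policy are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SvcPacket:
    seq: int    # transmission order
    layer: int  # 0 = base layer, 1+ = enhancement layers
    size: int   # bytes

def qos_discard(packets, capacity_bytes):
    """Admit packets within a byte budget, protecting lower SVC layers.

    Mimics a marker/dropper that discards enhancement-layer packets
    before base-layer ones under congestion (illustrative policy).
    """
    kept, used = [], 0
    for pkt in sorted(packets, key=lambda p: (p.layer, p.seq)):
        if used + pkt.size <= capacity_bytes:
            kept.append(pkt)
            used += pkt.size
    return sorted(kept, key=lambda p: p.seq)

packets = [SvcPacket(0, 0, 400), SvcPacket(1, 1, 400),
           SvcPacket(2, 2, 400), SvcPacket(3, 0, 400)]
survivors = qos_discard(packets, capacity_bytes=1200)
```

Under a 1200-byte budget the layer-2 packet is dropped first, so the base layer decodes intact and quality degrades gracefully rather than catastrophically.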
Abstract:
The number of software applications available on the Internet for distributing video streams in real time over P2P networks has grown quickly in the last two years. Typically, this kind of distribution is performed by television channel broadcasters trying to make their content globally available, using viewers' resources to support large-scale distribution of video without incurring incremental costs. However, the lack of adaptation in video quality, combined with the lack of a standard protocol for this kind of multimedia distribution, has driven content providers to largely ignore it as a solution for video delivery over the Internet. While the scalable extension of H.264 encoding (H.264/SVC) can be used to support terminal and network heterogeneity, it is not clear how it can be integrated into a P2P overlay to form a large-scale, real-time distribution. In this paper, we start by defining a solution that combines the most popular P2P file-sharing protocol, BitTorrent, with H.264/SVC encoding for real-time video content delivery. Using this solution, we then evaluate the effect of several parameters on the quality received by peers.
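One way such an integration could work is a piece-selection policy that layers SVC priorities on top of BitTorrent's rarest-first rule. This is only a sketch of the general idea, not the paper's protocol; every name and parameter here is hypothetical.

```python
def next_piece(missing, availability, layer_of, playhead, window):
    """Pick the next piece index to request, or None if nothing is missing.

    Pieces whose playback deadline is near (inside the window) are fetched
    lowest-layer-first; everything else uses plain rarest-first.
    """
    urgent = [p for p in missing if playhead <= p < playhead + window]
    if urgent:
        # Deadline-driven: base layer first, then earliest piece.
        return min(urgent, key=lambda p: (layer_of[p], p))
    if missing:
        # Steady state: rarest-first, as in standard BitTorrent.
        return min(missing, key=lambda p: (availability[p], p))
    return None

layer_of = {0: 0, 1: 1, 2: 0, 3: 1}      # piece index -> SVC layer
availability = {0: 5, 1: 1, 2: 3, 3: 2}  # piece index -> peers holding it
choice = next_piece({1, 2, 3}, availability, layer_of, playhead=1, window=2)
```

With the playback window covering pieces 1 and 2, the base-layer piece 2 is requested before the enhancement-layer piece 1, even though piece 1 is rarer.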
Abstract:
We compare the effect of different text segmentation strategies on speech-based passage retrieval of video. Passage retrieval has mainly been studied to improve document retrieval and to enable question answering. In these domains, the best results were obtained using passages defined by the paragraph structure of the source documents or by using arbitrary overlapping passages. For the retrieval of relevant passages in a video using speech transcripts, no author-defined segmentation is available. We compare retrieval results from four different types of segments based on the speech channel of the video: fixed-length segments, a sliding window, semantically coherent segments, and prosodic segments. We evaluated the methods on the corpus of the MediaEval 2011 Rich Speech Retrieval task. Our main conclusion is that the retrieval results depend highly on the right choice of segment length. However, results using the segmentation into semantically coherent parts depend much less on the segment length. In particular, the quality of fixed-length and sliding-window segmentation drops quickly as the segment length increases, while the quality of the semantically coherent segments is much more stable. Thus, if coherent segments are defined, longer segments can be used, and consequently fewer segments have to be considered at retrieval time.
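Two of the four strategies, fixed-length and sliding-window segmentation, are simple enough to sketch over a word-level transcript. Function names and parameters below are illustrative, not taken from the paper.

```python
def fixed_segments(words, length):
    """Non-overlapping segments of `length` words (the last may be shorter)."""
    return [words[i:i + length] for i in range(0, len(words), length)]

def sliding_segments(words, length, step):
    """Overlapping segments of `length` words, advancing `step` words."""
    last_start = max(len(words) - length, 0)
    return [words[i:i + length] for i in range(0, last_start + 1, step)]

words = "the quick brown fox jumps over the lazy dog".split()
fixed = fixed_segments(words, 4)         # 3 disjoint segments
sliding = sliding_segments(words, 4, 2)  # consecutive segments share 2 words
```

The abstract's finding, that fixed-length and sliding-window quality drops quickly as segments grow while coherent segments stay stable, suggests the `length` parameter matters far more for these two simple strategies.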
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Doctoral thesis, Geology (External Geodynamics), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
This practice-based PhD comprises two interrelated elements: (i) ‘(un)childhood’, a 53’ video-essay shown on two screens; and (ii) a 58,286-word written thesis. The project, which is contextualised within the tradition of artists working with their own children on time-based art projects, explores a new approach to time-based artistic work about childhood. While Stan Brakhage (1933-2003), Ernie Gehr (1943-), Erik Bullot (1963-) and Mary Kelly (1941-) all documented, photographed and filmed their children over a period of years to produce art projects (experimental films and a time-based installation), these projects were implicitly underpinned by a construction of childhood in which children, shown as they grow, represent the abstract primitive subject. The current project challenges the convention of representing children entirely from the adult’s point of view, as aesthetic objects without a voice, as well as the artist’s chronological approach to time. Instead, this project focuses on the relational joining of the child’s and adult’s points of view. The artist worked on a video project with her own son over a four-and-a-half-year period (between the ages of 5 and 10), through which she developed her ‘relational video-making’ methodology. The video-essay (un)childhood performs the relational voices of childhood as resulting from the verbal interactions of both children and adults. The non-chronological nature of (un)childhood offers an alternative to the linear-temporal approach to the representation of childhood. Through montage and a number of literal allusions to time in its dialogue, (un)childhood performs the relational times of childhood by combining children’s lives in the present with the temporal dimensions that have traditionally constructed childhood: past, future and timeless.
Abstract:
Data registration refers to a series of techniques for matching or bringing similar objects or datasets into alignment. These techniques enjoy widespread use in a diverse variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis, and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level, block-based, and feature-based methods to Fourier-domain methods. This book focuses on providing algorithms and image and video techniques for registration and quality performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
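As a toy instance of the pixel-level family of methods the book surveys, the sketch below registers two small grayscale images by exhaustive search over integer translations, scoring each candidate shift by mean absolute difference over the overlap. Real registration pipelines are far richer; this only fixes the core idea, and all names are illustrative.

```python
def register_translation(ref, moving, max_shift):
    """Find the integer (dy, dx) shift of `moving` that best matches `ref`.

    Each candidate shift is scored by the mean absolute pixel difference
    over the region where the two images overlap.
    """
    h, w = len(ref), len(ref[0])
    best = None  # (score, dy, dx)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost, n = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        cost += abs(ref[y][x] - moving[yy][xx])
                        n += 1
            score = cost / n
            if best is None or score < best[0]:
                best = (score, dy, dx)
    return best[1], best[2]

# `moving` is `ref` with its bright pixel shifted down by one row.
ref = [[0] * 4 for _ in range(4)]
ref[1][1] = 10
moving = [[0] * 4 for _ in range(4)]
moving[2][1] = 10
shift = register_translation(ref, moving, max_shift=1)
```

Exhaustive search is only practical for tiny shift ranges; block-based and Fourier-domain methods exist precisely to avoid this cost.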
Abstract:
Rapid developments in display technologies, digital printing, imaging sensors, image processing and image transmission are providing new possibilities for creating and conveying visual content. In an age in which images and video are ubiquitous and where mobile, satellite, and three-dimensional (3-D) imaging have become ordinary experiences, quantification of the performance of modern imaging systems requires appropriate approaches. At the end of the imaging chain, a human observer must decide whether images and video are of a satisfactory visual quality. Hence the measurement and modeling of perceived image quality is of crucial importance, not only in visual arts and commercial applications but also in scientific and entertainment environments. Advances in our understanding of the human visual system offer new possibilities for creating visually superior imaging systems and promise more accurate modeling of image quality. As a result, there is a profusion of new research on imaging performance and perceived quality.
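The simplest quantitative starting point for the quality measurement discussed above is a full-reference fidelity metric such as PSNR. Note that PSNR correlates only loosely with perceived quality, which is exactly why the perceptual models surveyed here matter; the sketch below is generic and not tied to any particular system.

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized grayscale images."""
    diff_sq = sum((a - b) ** 2
                  for row_r, row_t in zip(ref, test)
                  for a, b in zip(row_r, row_t))
    mse = diff_sq / (len(ref) * len(ref[0]))
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)

# One pixel differs by 10 in a 2x2 image: MSE = 25, PSNR ~ 34.15 dB.
score = psnr([[0, 0], [0, 0]], [[10, 0], [0, 0]])
```

Perceptual metrics refine this by weighting errors according to how visible they are to the human visual system, rather than treating all pixel differences equally.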
Abstract:
General simulated scenes. These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during irradiation experiments, where a material sample is placed within the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples in these positions were not reproduced in detail; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of Argonauta to verify some operational condition. - Scene 1 (Obs.: Scene 1 of the "General simulated scenes" class): Comprises one of the scenes with two persons. Both wear light-colored clothes. Both persons remain still in front of "J9"; one goes to the computer and then comes back, and both go out. Video file labels: "20140326145315_IPCAM": recorded by the right camera.
Abstract:
General simulated scenes. These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during irradiation experiments, where a material sample is placed within the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples in these positions were not reproduced in detail; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of Argonauta to verify some operational condition. - Scene 1 (Obs.: Scene 1 of the "General simulated scenes" class): Comprises one of the scenes with two persons. Both wear light-colored clothes. Both persons remain still in front of "J9"; one goes to the computer and then comes back, and both go out. Video file labels: "20140326145316_IPCAM": recorded by the left camera.
Abstract:
Scenes for Spectrography experiment. Scenes were recorded following the tasks involved in spectrography experiments, which are carried out in front of the "J9" output radiation channel, the latter in open condition. These tasks may be executed by one or two persons. One person can do the tasks alone, but this requires crouching in front of "J9" to adjust the angular position of the experimental apparatus (a crystal that bends the neutron radiation toward the spectrograph), and then getting up to verify data on a nearby computer; these movements are repeated until the right operational conditions are achieved. Two people may aid one another, with one remaining crouched while the other remains still in front of the computer. They may also interchange tasks so as to divide the received doses. To date, two scenes with one person and one scene with two persons are available. These scenes are described below: - Scene 1: Comprises one of the scenes with one person performing the spectrography experiment. Video file labels: "20140327181335_IPCAM": recorded by the right camera.
Abstract:
General simulated scenes. These scenes followed a pre-defined script (see the Thesis for details), with common movements corresponding to general experiments. People go to or stand still in front of "J9", and/or go to the side of the Argonauta reactor and come back again. The first type of movement is common during irradiation experiments, where a material sample is placed within the "J9" channel, and also during neutrongraphy or gammagraphy experiments, where a sample is placed in front of "J9". Here, the detailed movements of placing samples in these positions were not reproduced in detail; only whole-body movements were simulated (such as crouching or standing still in front of "J9"). The second type of movement may occur when operators go to the side of Argonauta to verify some operational condition. - Scene 2: Comprises one of the scenes with two persons. Both wear dark-colored clothes. Both persons go to the side of the Argonauta reactor, then come back and go out. Video file labels: "20140326154754_IPCAM": recorded by the right camera.
Abstract:
Scenes for Spectrography experiment. Scenes were recorded following the tasks involved in spectrography experiments, which are carried out in front of the "J9" output radiation channel, the latter in open condition. These tasks may be executed by one or two persons. One person can do the tasks alone, but this requires crouching in front of "J9" to adjust the angular position of the experimental apparatus (a crystal that bends the neutron radiation toward the spectrograph), and then getting up to verify data on a nearby computer; these movements are repeated until the right operational conditions are achieved. Two people may aid one another, with one remaining crouched while the other remains still in front of the computer. They may also interchange tasks so as to divide the received doses. To date, two scenes with one person and one scene with two persons are available. These scenes are described below: - Scene 1: Comprises one of the scenes with one person performing the spectrography experiment. Video file labels: "20140327181336_IPCAM": recorded by the left camera.