956 results for High definition television
Abstract:
OBJECTIVES: To evaluate high-definition and conventional oscillometry in comparison with direct blood pressure measurements in anaesthetised dogs. METHODS: Eight simultaneous readings for systolic, diastolic and mean pressure were obtained directly and with each of two devices in nine anaesthetised dogs. Measurement procedure and validation were based on the 2007 ACVIM guidelines. RESULTS: Sixty-three simultaneous readings were evaluated for each device and direct measurements. The mean differences (bias) from direct values were within 10 mmHg for both devices, although bias for systolic and diastolic blood pressures was higher for Memodiagnostic. The standard deviations of differences (precision) were within 15 mmHg for Dinamap but exceeded this limit for Memodiagnostic. Correlation coefficients were higher for Dinamap than Memodiagnostic, but both failed to reach a correlation of 0.9. Over 50% of values lay within 10 mmHg of direct measures for both devices, but this percentage was greater for Dinamap than Memodiagnostic. Over 80% of values lay within 20 mmHg of direct measures for Dinamap but not for Memodiagnostic. CLINICAL SIGNIFICANCE: Both devices failed to meet ACVIM guideline validation; however, Dinamap failed only with regard to correlation. Memodiagnostic failed on several requirements and, based on poor correlation, accuracy and precision, cannot currently be recommended for dogs under anaesthesia.
Abstract:
A new generation of high-definition computed tomography (HDCT) 64-slice devices, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction), offers substantially higher resolution than standard-definition CT (SDCT) scanners. Because higher resolution entails higher noise, we compared the image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT and were compared to 93 patients who had previously undergone CCTA with SDCT, matched for heart rate (HR), HR variability and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluable; 4, non-evaluable). Effective radiation dose was calculated from the DLP multiplied by a conversion factor (0.014 mSv/mGy × cm). The mean image quality score from HDCT versus SDCT was comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not significantly differ between HDCT (1.7 ± 0.6 mSv, range 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv, range 0.8-5.5 mSv; P = n.s.). HDCT scanners allow low-dose 64-slice CCTA scanning with higher resolution than SDCT while maintaining image quality at an equally low radiation dose. Whether this will translate into higher accuracy of HDCT for CAD detection remains to be evaluated.
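The effective-dose computation quoted above is a single multiplication of the dose-length product (DLP) by the stated conversion factor; a minimal sketch (the function name is ours):

```python
def effective_dose_msv(dlp_mgy_cm: float, k: float = 0.014) -> float:
    """Effective radiation dose in mSv from the dose-length product
    (DLP, in mGy*cm), using the conversion factor k = 0.014 mSv/(mGy*cm)
    given in the abstract."""
    return dlp_mgy_cm * k
```

For example, a DLP of about 121 mGy·cm yields roughly 1.7 mSv, in line with the mean HDCT dose reported above.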
Abstract:
Multimedia distribution through wireless networks in the home environment offers advantages that have fueled industry interest in recent years, such as simple connectivity and data delivery to a variety of devices. Together with High-Definition (HD) content, multimedia wireless networks have been proposed for several applications, such as IPTV and digital TV distribution to multiple devices in the home. For these scenarios, we propose a multicast distribution system for High-Definition video over 802.11 wireless networks based on rate-limited packet retransmission. We develop a limited-rate ARQ system that retransmits packets according to the importance of their content (prioritization scheme) and their delay limitations (delay control). The performance of the proposed ARQ system is evaluated and compared with a similarly rate-limited ARQ algorithm. The results show a higher packet recovery rate and improvements in video quality for our proposed system.
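The retransmission policy described (content-importance prioritization plus delay control under a rate budget) might be sketched as follows; the data structure, field names and thresholds are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class LostPacket:
    seq: int            # sequence number
    priority: int       # lower value = more important content (e.g. I-frame data)
    deadline_ms: float  # time remaining until the packet's playout deadline

def select_retransmissions(lost, budget, rtt_ms):
    """Rate-limited ARQ sketch: discard lost packets that cannot arrive
    before their playout deadline (delay control), then retransmit the
    most important remaining ones up to the rate budget (prioritization)."""
    viable = [p for p in lost if p.deadline_ms > rtt_ms]   # delay control
    viable.sort(key=lambda p: p.priority)                  # prioritization
    return viable[:budget]
```

A packet whose deadline would pass before a retransmission could arrive is simply skipped, so the limited retransmission rate is spent only on packets that can still improve the decoded video.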
Abstract:
A high-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the major open challenges in the multimedia field. Video quality has a very strong impact on the end user's (consumer's) perception of services built on the delivery of multimedia content, and is therefore a key factor in assessing the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system; the most important are those that employ psychovisual models aimed at reproducing the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which the quality computation is based on extracting and comparing intrinsic image features. Despite the advances made in this field in recent years, research into video quality metrics, whether the full reference is available (full-reference models), only part of it is (reduced-reference models), or none at all (no-reference models), still has ample room for improvement and goals left to reach. Among these, the measurement of high-definition signals, especially the very-high-quality signals used in the early stages of the value chain, is of special interest because of its influence on the final service quality, and no reliable measurement models for it currently exist.
This doctoral thesis presents a full-reference quality measurement model, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed from the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the Haralick texture statistics set. PARMENIA is novel in using mathematical morphology and Haralick statistics as the basis of a quality metric, since these techniques have traditionally been associated with remote sensing and object segmentation. Formulating the metric as a weighted set of ratios is also novel, since it draws both on structural similarity models and on more classical ones based on the perceptibility of the error introduced by compression-related signal degradation. PARMENIA shows very high correlation with the MOS scores obtained from the subjective user tests carried out for its validation. The working corpus was selected from internationally validated sequence sets, so that the reported results are of the highest possible quality and rigour.
The methodology consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining subjective ratings of these sequences through subjective quality tests (based on International Telecommunication Union Recommendation ITU-R BT.500), and validating the model by computing the correlation of PARMENIA with these subjective scores, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimized and their high correlation with perception confirmed, a second evaluation was carried out on sequences from the HDTV Test Dataset 1 of the Video Quality Experts Group (VQEG), and the results showed clear advantages for the model. Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions together with rising quality expectations (e.g. high definition and better image quality) calls for quality measurement models to be redefined. Given the growing interest in multimedia service delivery, perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches for measuring video quality, to sum up the state of the art. Then, this doctoral thesis describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features and visual similarity information, providing a normalized metric, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), that is highly correlated with MOS scores.
The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and the contrast and homogeneity Haralick texture features. The metric's performance is excellent, and it improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at very high bit rates whose quality is currently transparent to quality metrics. PARMENIA introduces a degree of novelty over other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, but complements the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that PARMENIA is the only metric built upon full-reference ratios that uses mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it yields results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5-6 Mbps) and even streaming rates (1-2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily established. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high quality HD materials. All these results come from a validation carried out on internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were performed. The Pearson correlation coefficient has been calculated to verify the accuracy and reliability of PARMENIA.
Abstract:
Background and aim: The usefulness of high-definition colonoscopy plus i-scan (HD+i-SCAN) for average-risk colorectal cancer screening has not been fully assessed. The adenoma detection rate and other measurements, such as the number of adenomas per colonoscopy and the flat adenoma detection rate, have been recognized as markers of colonoscopy quality. The aim of the present study was to compare the diagnostic performance of HD+i-SCAN with that of a standard-resolution white-light colonoscope. Methods: This is a retrospective analysis of a prospectively collected screening colonoscopy database. A comparative analysis of the diagnostic yield of HD+i-SCAN versus standard-resolution colonoscopy for average-risk colorectal screening was conducted. Results: During the study period, 155/163 (95.1%) patients met the inclusion criteria. The mean age was 56.9 years. Sixty of 155 (39%) colonoscopies were performed using HD+i-SCAN. Adenoma detection rates during withdrawal for standard-resolution versus HD+i-SCAN colonoscopies were 29.5% and 30% (p = n.s.). Adenomas per colonoscopy for standard-resolution versus HD+i-SCAN colonoscopies were 0.46 (SD = 0.9) and 0.72 (SD = 1.3) (p = n.s.). A greater number of flat adenomas was detected in the HD+i-SCAN group (6/60 vs. 2/95) (p < .05). Likewise, serrated adenomas/polyps per colonoscopy were also higher in the HD+i-SCAN group. Conclusions: HD+i-SCAN colonoscopy increases the flat adenoma detection rate and serrated adenomas/polyps per colonoscopy compared to standard colonoscopy in an average-risk screening population. HD+i-SCAN is a simple, available procedure that can be helpful even for experienced providers. The performance of HD+i-SCAN and the substantial prevalence of flat lesions in our average-risk screening cohort support its usefulness in improving the efficacy of screening colonoscopies.
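The two quality markers compared above (adenoma detection rate and adenomas per colonoscopy) are simple per-procedure counts; a minimal sketch with made-up data, not the study's:

```python
def colonoscopy_quality(adenoma_counts):
    """adenoma_counts: number of adenomas found in each procedure.
    Returns (adenoma detection rate, adenomas per colonoscopy)."""
    n = len(adenoma_counts)
    adr = sum(c > 0 for c in adenoma_counts) / n   # fraction with >= 1 adenoma
    apc = sum(adenoma_counts) / n                  # mean adenomas per procedure
    return adr, apc
```

Note that the two measures can diverge: a cohort where a few procedures find many adenomas each can have a low detection rate but a high adenomas-per-colonoscopy value, which is why both are reported above.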
Abstract:
The widespread availability of and demand for multimedia-capable devices and multimedia content have fueled the need for high-speed wireless connectivity beyond the capabilities of existing commercial standards. While fiber-optic data links can provide multigigabit-per-second data rates, cost and deployment are often prohibitive in many applications. Wireless links, on the contrary, can provide a cost-effective fiber alternative to interconnect outlying areas beyond the reach of the fiber rollout. With this in mind, the ever-increasing demand for multi-gigabit wireless applications, fiber segment replacement, mobile backhauling and aggregation, and covering the last mile have posed enormous challenges for next-generation wireless technologies. In particular, the unbalanced temporal and geographical variations of spectrum usage, along with the rapid proliferation of bandwidth-hungry mobile applications such as video streaming with high-definition television (HDTV) and ultra-high-definition video (UHDV), have inspired millimeter-wave (mmWave) communications as a promising technology to alleviate the pressure on scarce spectrum resources for fifth-generation (5G) mobile broadband.
Abstract:
This paper presents a critical analysis of ultrawideband (UWB) and considers the turbulent journey it has had from the Federal Communications Commission's bandwidth allocation in 2002 to today. It analyzes the standards, the standoffs, and the stalemate in standardization activities and investigates the past and present research and commercial activities in realizing the UWB dream. In this paper, statistical evidence is presented to depict UWB's changing fortunes and is utilized as an indicator of future prominence. This paper reviews some of the opinions and remarks from commentators and analyzes predictions that were made. Finally, it presents possible ways forward to reignite the high-data-rate UWB standardization pursuit.
Abstract:
This work focuses on the study of the MPEG video compression standards. To this end, a study was undertaken starting from the basics of digital video, addressing the components necessary for understanding the tools used by the MPEG video coding standards. The Moving Picture Experts Group (MPEG) was formed in the late '80s by a group of experts in order to create international standards for encoding and decoding audio and video. This paper discusses the techniques present in the MPEG video compression standards, as well as their evolution. MPEG-1, MPEG-2, MPEG-4 and H.264 (MPEG-4 Part 10) are described, with the last two presented in greater depth, because these standards are present in most modern video technologies, such as HDTV broadcasts.
Abstract:
This paper presents a comparison among different consumer 3D display technologies by means of a subjective assessment test. Four 55-in displays were considered: one autostereoscopic display, one stereoscopic display with polarized passive glasses, and two with active shutter glasses. In addition, a high-quality 3D video database was used to show diverse material with both views in high definition. To carry out the test, standard recommendations were followed, with some modifications to make the test environment more similar to real home viewing conditions, with the objective of obtaining more representative conclusions. Moreover, several perceptual factors were considered to study the performance of the displays, such as picture quality, depth perception, and visual discomfort. The results reveal interesting findings, such as the performance improvement of active-shutter-glasses technology, the high performance of polarized-glasses technology in terms of quality and comfort, and the need for improvement of autostereoscopic displays in visual comfort in order to reach a globally high-quality visual experience.
Abstract:
Scalable high-resolution tiled display walls are becoming increasingly important to decision makers and researchers because high pixel counts in combination with large screen areas facilitate content-rich, simultaneous display of computer-generated visualization information and high-definition video data from multiple sources. This tutorial is designed to cater for new users as well as researchers who are currently operating tiled display walls or 'OptiPortals'. We will discuss the current and future applications of display wall technology and explore opportunities for participants to collaborate and contribute in a growing community. Multiple tutorial streams will cover both hands-on practical development, as well as policy and method design for embedding these technologies into the research process. Attendees will be able to gain an understanding of how to get started with developing similar systems themselves, in addition to becoming familiar with typical applications and large-scale visualization techniques. Presentations in this tutorial will describe current implementations of tiled display walls that highlight the effective usage of screen real estate with various visualization datasets, including collaborative applications such as visualcasting, classroom learning and video conferencing. A feature presentation for this tutorial will be given by Jurgen Schulze from Calit2 at the University of California, San Diego. Jurgen is an expert in scientific visualization in virtual environments, human-computer interaction, real-time volume rendering, and graphics algorithms on programmable graphics hardware.
Abstract:
This chapter explores the distinctive qualities of the Matt Smith era of Doctor Who, focusing on how dramatic emphases are connected with emphases on visual style, and how this depends on the programme's production methods and technologies. Doctor Who was first made in the 1960s era of live, studio-based, multi-camera television with monochrome pictures. However, as technical innovations such as colour filming, stereo sound, CGI, post-production effects technology and now High Definition (HD) cameras have been routinely introduced into the programme, they have given Doctor Who's creators new ways of making visually distinctive narratives. Indeed, it has been argued that since the 1980s television drama has become increasingly like cinema in its production methods and aesthetic aims. Viewers' ability to view the programme on high-specification TV sets, and to record and repeat episodes using digital media, also encourages attention to visual style in television as much as in cinema. The chapter evaluates how these new circumstances affect what Doctor Who has become and engages with arguments that visual style has been allowed to override characterisation and story in the current Doctor Who. The chapter refers to specific episodes and frames the analysis with reference to earlier years in Doctor Who's long history. For example, visual spectacle using green-screen and CGI can function as a set-piece (at the opening or ending of an episode) but can also work 'invisibly' to render a setting realistically. Shooting on location with HD cameras provides a rich and detailed image texture, but also highlights mistakes, especially problems of lighting. The reduction of Doctor Who's budget has led Steven Moffat's episodes to rely less on visual extravagance, connecting back both to Russell T. Davies's concern to show off the BBC's investment in the series and to British traditions of gritty and intimate social drama.
Pressures to capitalise on Doctor Who as a branded product are the final aspect of the chapter's analysis, where the role of Moffat as 'showrunner' links him to an American (not British) style of television production in which the preservation of format and brand values gives him unusual power over the look of the series.
Abstract:
Graduate Program in Communication - FAAC