956 results for High definition television


Relevance:

90.00%

Publisher:

Abstract:

Graduate Program in Digital Television: Information and Knowledge - FAAC

Relevance:

90.00%

Publisher:

Abstract:

This text reflects on the techniques and process of image production and its artistic composition for digital television in high definition, given that the 4:3 and 16:9 formats will coexist until the shutdown of the analog signal. During this period, consumers of television programming who have analog receivers will continue to see images in the 4:3 format. This means that high-definition productions, while capturing images with a larger viewing area and a high contrast ratio (>1000:1), must keep the elements of visual storytelling within the smaller area and within the contrast ratio of analog TV (30:1), at the risk of visually distorting the messages produced by the directors.
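
As a concrete illustration of the constraint described above, the following minimal sketch (not from the paper) computes the centre-cut 4:3 region of a 1920x1080 frame, i.e. the smaller area still visible to viewers with analog 4:3 receivers; the function name and frame size are illustrative assumptions.

```python
# Illustrative sketch: the centre-cut 4:3 region of a 16:9 HD frame,
# i.e. the area still visible to viewers with analog 4:3 receivers.

def center_cut_4_3(width: int, height: int) -> tuple[int, int, int, int]:
    """Return (x, y, w, h) of the 4:3 window centred in the frame."""
    safe_w = height * 4 // 3      # width of a 4:3 area with the same height
    x = (width - safe_w) // 2     # horizontal offset of the cut
    return x, 0, safe_w, height

x, y, w, h = center_cut_4_3(1920, 1080)
print(f"4:3 safe area: {w}x{h} at offset ({x},{y})")  # 1440x1080 at (240,0)
```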

Relevance:

90.00%

Publisher:

Abstract:

Recently, three-dimensional (3D) video has decisively burst onto the entertainment industry scene, and has arrived in households even before the standardization process has been completed. 3D television (3DTV) adoption and deployment can be seen as a major leap in television history, similar to previous transitions from black and white (B&W) to color, from analog to digital television (TV), and from standard definition to high definition. In this paper, we analyze current 3D video technology trends in order to define a taxonomy of the availability and possible introduction of 3D-based services. We also propose an audiovisual network services architecture which provides a smooth transition from two-dimensional (2D) to 3DTV in an Internet Protocol (IP)-based scenario. Based on subjective assessment tests, we also analyze those factors which will influence the quality of experience in those 3D video services, focusing on effects of both coding and transmission errors. In addition, examples of the application of the architecture and results of assessment tests are provided.

Relevance:

80.00%

Publisher:

Abstract:

Metabonomics, a new "omics" technique, shows promise for validating Chinese medicines and the compatibility of Chinese formulas. The present study explored the excretion pattern of low-molecular-mass metabolites in a male Wistar-derived rat model of kidney yin deficiency induced with thyroxine and reserpine, as well as the therapeutic effect of Liu Wei Di Huang Wan (LW), a classic traditional Chinese medicine formula for treating kidney yin deficiency in China, and its separated prescriptions. The study used ultra-performance liquid chromatography/electrospray ionization Synapt high-definition mass spectrometry (UPLC/ESI-SYNAPT-HDMS) in both negative and positive electrospray ionization (ESI) modes. At the same time, blood biochemistry was examined to identify specific changes in the kidney yin deficiency. Distinct changes in the pattern of metabolites, resulting from daily administration of thyroxine and reserpine, were observed by UPLC-HDMS combined with principal component analysis (PCA). According to the PCA score plots, the changes in metabolic profiling were restored to their baseline values after treatment with LW. Altogether, the current metabonomic approach based on UPLC-HDMS and orthogonal projection to latent structures discriminant analysis (OPLS-DA) indicated 20 ions (14 in the negative mode, 8 in the positive mode, and 2 in both) as "differentiating metabolites".
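
A minimal sketch of the PCA step described above, on synthetic data (the real study used UPLC-HDMS metabolite intensities; the group sizes and the number of shifted ions here are illustrative assumptions):

```python
# Synthetic illustration: how PCA score plots can separate metabolite
# profiles of control rats from the kidney-yin-deficiency model group.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_metabolites = 200
control = rng.normal(0.0, 1.0, size=(10, n_metabolites))  # 10 control rats
model = rng.normal(0.0, 1.0, size=(10, n_metabolites))    # 10 model rats
model[:, :20] += 3.0   # pretend 20 ions shift in the disease model

scores = PCA(n_components=2).fit_transform(np.vstack([control, model]))
# The first component separates the two groups along the shifted ions.
print(scores[:10, 0].mean(), scores[10:, 0].mean())
```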

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) depends both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation instead of one based on player "identity" can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, ran a state-of-the-art real-time player detector over approximately 200,000 frames of data, and compared the results to manually labelled data.
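
One common way to realise the role assignment idea mentioned above is minimum-cost matching between detections and per-role prototypes. The sketch below uses the Hungarian algorithm with made-up coordinates; it is an assumption for illustration, not the paper's exact formulation.

```python
# Sketch of a "role-based" representation: assign detections to roles
# by minimum-cost matching (assumed method; coordinates are made up).
import numpy as np
from scipy.optimize import linear_sum_assignment

role_prototypes = np.array([[10.0, 5.0], [50.0, 5.0], [30.0, 20.0]])  # mean x,y per role
detections = np.array([[49.0, 6.0], [29.0, 21.0], [11.0, 4.0]])       # this frame

# Cost = squared distance between each detection and each role prototype.
cost = ((detections[:, None, :] - role_prototypes[None, :, :]) ** 2).sum(-1)
det_idx, role_idx = linear_sum_assignment(cost)
for d, r in zip(det_idx, role_idx):
    print(f"detection {d} -> role {r}")
```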

Relevance:

80.00%

Publisher:

Abstract:

This paper describes a safety data recording and analysis system that has been developed to capture safety occurrences, including precursors, using high-definition forward-facing video from train cabs and data from other train-borne systems. The paper describes the data processing model and how events detected through data analysis are related to an underlying socio-technical model of accident causation. The integrated approach to safety data recording and analysis ensures that systemic factors which condition, influence or potentially contribute to an occurrence are captured both for safety occurrences and for precursor events, providing a rich tapestry of antecedent causal factors that can significantly improve learning around accident causation. This can ultimately benefit railways through the development of targeted and more effective countermeasures, better risk models and more effective use and prioritization of safety funds. Level crossing occurrences are a key focus in this paper, with data analysis scenarios describing causal factors around near-miss occurrences. The paper concludes with a discussion of how the system can also be applied to other types of railway safety occurrences.

Relevance:

80.00%

Publisher:

Abstract:

1. Marine ecosystems provide critically important goods and services to society, and hence their accelerated degradation underpins an urgent need to take rapid, ambitious and informed decisions regarding their conservation and management.
2. The capacity, however, to generate the detailed field data required to inform conservation planning at appropriate scales is limited by time- and resource-consuming methods for collecting and analysing field data at the large scales required.
3. The 'Catlin Seaview Survey', described here, introduces a novel framework for large-scale monitoring of coral reefs using high-definition underwater imagery collected with customized underwater vehicles in combination with computer vision and machine learning. This enables quantitative and geo-referenced outputs of coral reef features such as habitat types, benthic composition and structural complexity (rugosity) to be generated across multiple kilometre-scale transects with a spatial resolution ranging from 2 to 6 m².
4. The novel application of technology described here has enormous potential to contribute to our understanding of coral reefs and associated impacts by underpinning management decisions with kilometre-scale measurements of reef health.
5. Imagery datasets from an initial survey of 500 km of seascape are freely available through an online tool called the Catlin Global Reef Record. Outputs from the image analysis using the technologies described here will be updated on the online repository as work progresses on each dataset.
6. Case studies illustrate the utility of the outputs as well as their potential to link to information from remote sensing. The potential implications of the innovative technologies for marine resource management and conservation are also discussed, along with the accuracy and efficiency of the methodologies deployed.

Relevance:

80.00%

Publisher:

Abstract:

Due to their unobtrusive nature, vision-based approaches to tracking sports players have been preferred over wearable sensors, as they do not require the players to be instrumented for each match. Unfortunately, due to the heavy occlusion between players, variation in resolution and pose, and fluctuating illumination conditions, tracking players continuously is still an unsolved vision problem. For tasks like clustering and retrieval, having noisy data (i.e. missing and false player detections) is problematic as it generates discontinuities in the input data stream. One method of circumventing this issue is to use an occupancy map, where the field is discretised into a series of zones and a count of player detections in each zone is obtained. A series of frames can then be concatenated to represent a set-play or example of team behaviour. A problem with this approach, though, is that the compressibility is low (i.e. the variability in the feature space is incredibly high). In this paper, we propose the use of a bilinear spatiotemporal basis model with a role representation, operating in a low-dimensional space, to clean up the noisy detections. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, ran a state-of-the-art real-time player detector over approximately 200,000 frames of data, and compared the results to manually labeled data.
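
A minimal sketch of the occupancy-map representation described above; the pitch dimensions and zone grid are assumptions for illustration, not the paper's values.

```python
# Discretise the pitch into zones and count player detections per zone.
import numpy as np

PITCH_W, PITCH_H = 91.4, 55.0   # field-hockey pitch in metres
NX, NY = 10, 6                   # zone grid (assumed)

def occupancy_map(detections: np.ndarray) -> np.ndarray:
    """detections: (n, 2) array of (x, y) positions; returns (NY, NX) counts."""
    gx = np.clip((detections[:, 0] / PITCH_W * NX).astype(int), 0, NX - 1)
    gy = np.clip((detections[:, 1] / PITCH_H * NY).astype(int), 0, NY - 1)
    grid = np.zeros((NY, NX), dtype=int)
    np.add.at(grid, (gy, gx), 1)   # tolerate several players in one zone
    return grid

frame = np.array([[12.0, 30.0], [12.5, 30.2], [80.0, 10.0]])
print(occupancy_map(frame))
```

Concatenating these per-frame grids over time yields the high-dimensional but noisy signal whose dimensionality motivates the paper's low-dimensional basis model.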

Relevance:

80.00%

Publisher:

Abstract:

Current mobile devices and streaming video services support high definition (HD) video, raising expectations for more content. HD video streaming generally requires large bandwidth, exerting pressure on existing networks. A new generation of video compression codecs, such as VP9 and H.265/HEVC, is expected to be more effective at reducing bandwidth. Existing studies measuring the impact of compression on users' perceived quality have not focused on mobile devices. Here we propose new Quality of Experience (QoE) models that consider both subjective and objective assessments of mobile video quality. We introduce novel predictors, such as the correlation between video resolution and coding-unit size, and achieve a high goodness-of-fit to the collected subjective assessment data (adjusted R-square >83%). The performance analysis shows that H.265 can potentially achieve 44% to 59% bit rate savings compared to H.264/AVC, slightly better than VP9 at 33% to 53%, depending on video content and resolution.
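
For concreteness, a sketch of fitting a linear QoE model and reporting adjusted R-squared, the goodness-of-fit measure quoted above; the data and predictor names are synthetic stand-ins, not the study's.

```python
# Fit a linear QoE predictor on synthetic subjective scores and report
# adjusted R-squared (penalises R^2 for the number of predictors).
import numpy as np

rng = np.random.default_rng(1)
n = 60
X = np.column_stack([
    rng.uniform(0.2, 1.0, n),   # e.g. normalised bit rate (assumed predictor)
    rng.uniform(0.0, 1.0, n),   # e.g. resolution/coding-unit correlation
    np.ones(n),                 # intercept
])
mos = 1.5 * X[:, 0] + 2.0 * X[:, 1] + 1.0 + rng.normal(0, 0.2, n)

beta, *_ = np.linalg.lstsq(X, mos, rcond=None)
pred = X @ beta
r2 = 1 - ((mos - pred) ** 2).sum() / ((mos - mos.mean()) ** 2).sum()
p = X.shape[1] - 1                             # predictors, excluding intercept
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"adjusted R^2 = {adj_r2:.3f}")
```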

Relevance:

80.00%

Publisher:

Abstract:

Large Display Arrays (LDAs) use Light Emitting Diodes (LEDs) to inform a viewing audience. A matrix of individually driven LEDs allows the display area to present text, images and video. LDAs have undergone rapid development over the past 10 years in both modular and semi-flexible formats. This thesis critically analyses the communication architecture and processor functionality of current LDAs and presents an alternative: Scalable Flexible Large Display Arrays (SFLDAs). SFLDAs are more adaptable to a variety of applications because of enhancements in scalability and flexibility. Scalability is the ability to configure SFLDAs from 0.8 m² to 200 m². Flexibility is increased functionality within the processors to handle changes in configuration, together with a communication architecture that standardises two-way communication throughout the SFLDA. While common video platforms such as Digital Video Interface (DVI), Serial Digital Interface (SDI) and High Definition Multimedia Interface (HDMI) were considered as solutions for the communication architecture of SFLDAs, so too were modulation, fibre optics, capacitive coupling and Ethernet. From an analysis of these architectures, Ethernet was identified as the best solution. The use of Ethernet as the communication architecture in SFLDAs means that both hardware and software modules are capable of interfacing with the SFLDAs. The Video to Ethernet Processor Unit (VEPU), Scoreboard, Image and Control Software (SICS) and Ethernet to LED Processor Unit (ELPU) have been developed to form the key components in designing and implementing the first SFLDA. Data throughput rate and spectrophotometer tests were used to measure the effectiveness of Ethernet within the SFLDA constructs. The results of testing and analysis of these architectures showed that Ethernet satisfactorily met the requirements of SFLDAs.
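
To illustrate the Ethernet-based approach, here is a minimal sketch of a sender pushing one panel's worth of RGB data per UDP datagram. The packet layout, panel size, address and port are all assumptions for illustration; they are not the thesis's VEPU/ELPU protocol.

```python
# Illustrative sender (standing in for a video-to-Ethernet unit): one
# panel's RGB pixel data per UDP datagram, prefixed by a tiny header.
import socket
import struct

PANEL_W, PANEL_H = 32, 16        # LEDs per panel (assumed)

def send_panel(sock: socket.socket, addr: tuple, panel_id: int,
               pixels: bytes) -> None:
    """pixels: PANEL_W*PANEL_H*3 bytes of RGB data for one panel."""
    assert len(pixels) == PANEL_W * PANEL_H * 3
    header = struct.pack("!HH", panel_id, len(pixels))  # id + payload length
    sock.sendto(header + pixels, addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
red_panel = bytes([255, 0, 0]) * (PANEL_W * PANEL_H)    # solid red test frame
send_panel(sock, ("192.168.1.50", 5568), panel_id=0, pixels=red_panel)
```

Standard Ethernet hardware on both ends is what makes the architecture scalable: adding panels means adding addressable receivers rather than redesigning a point-to-point video link.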

Relevance:

80.00%

Publisher:

Abstract:

High-throughput DNA sequencing (HTS) instruments today are capable of generating millions of sequencing reads in a short period of time, which represents a serious challenge to current bioinformatics pipelines: processing such an enormous amount of data in a fast and economical fashion. Modern graphics cards are powerful processing units consisting of hundreds of scalar processors operating in parallel to render high-definition graphics in real time. It is this computational capability that we propose to harness in order to accelerate some of the time-consuming steps in analyzing data generated by HTS instruments. We have developed BarraCUDA, a novel sequence mapping software package that utilizes the parallelism of NVIDIA CUDA graphics cards to map sequencing reads to a particular location on a reference genome. While delivering a mapping fidelity similar to other mainstream programs, BarraCUDA is an order of magnitude faster in mapping throughput than its CPU counterparts. The software can also use multiple CUDA devices in parallel to further accelerate the mapping throughput. BarraCUDA is designed to take advantage of the parallelism of the GPU to accelerate the mapping of millions of sequencing reads generated by HTS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available at http://seqbarracuda.sf.net
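
As a conceptual illustration of read mapping only (BarraCUDA itself uses a BWT-based index on the GPU, which this toy pure-Python sketch does not model), here is a minimal seed-based mapper; the sequences and k-mer length are made up.

```python
# Toy seed-based read mapping: index reference k-mers, then look up a
# read's leading k-mer to propose candidate alignment positions.
from collections import defaultdict

def build_index(reference: str, k: int = 8) -> dict:
    """Index every k-mer of the reference by its positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def map_read(read: str, index: dict, k: int = 8) -> list:
    """Candidate alignment positions from the read's first k-mer seed."""
    return index.get(read[:k], [])

ref = "ACGTACGTTTGACCAGGTACGTACCA"
idx = build_index(ref)
print(map_read("TTGACCAG", idx))   # -> [8]
```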

Relevance:

80.00%

Publisher:

Abstract:

Evaluating the mechanical properties of rock masses is the basis of rock engineering design and construction, and it has a great influence on the safety and cost of rock projects. The need for such evaluation is an inevitable consequence of new engineering activities in rock, including high-rise buildings, super bridges, complex underground installations, hydraulic projects and so on. During construction, many engineering accidents have happened, bringing great harm to people; according to investigation, many failures are due to the choice of improper mechanical properties. The inability to supply proper properties has become one of the big problems for theoretical analysis and numerical simulation, so selecting the properties reasonably and effectively is very significant for the planning, design and construction of rock engineering works. A multiple method based on site investigation, theoretical analysis, model tests, numerical tests and back analysis by artificial neural network is conducted to determine and optimize the mechanical properties for engineering design. The following outcomes are obtained:

(1) Mapping of the rock mass structure. Detailed geological investigation is the soul of fine structure description. Based on statistical windows, geological sketches and digital photography, a new method for in-situ mapping of the rock mass fine structure is developed. It has already been put into practice and received good comments at the Baihetan Hydropower Station.

(2) Theoretical analysis of rock mass containing intermittent joints. The shear strength mechanisms of the joint and the rock bridge are analyzed, and the multiple modes of failure under different stress conditions are summarized and extended. By introducing a deformation compatibility equation in the normal direction, the direct shear strength formulation and the compression shear strength formulation for coplanar intermittent joints, as well as the compression shear strength formulation for ladder-like intermittent joints, are deduced. To apply the deduced formulations conveniently in real projects, a relationship between these formulations and the Mohr-Coulomb hypothesis is established.

(3) Model tests of rock mass containing intermittent joints. Model tests are adopted to study the mechanical contribution of joints to rock masses, and the failure modes of rock mass containing intermittent joints are summarized from the tests. Six typical failure modes are found, with brittle failure dominating. The evolution of shear stress, shear displacement, normal stress and normal displacement is monitored using a rigid servo test machine, and the deformation and failure behaviour during loading is analyzed. According to the model tests, the failure modes depend strongly on the joint distribution, connectivity and stress state. A contrastive analysis of the complete stress-strain curves reveals different failure development stages in intact rock, across-jointed rock mass and intermittently jointed rock mass: four typical stages in intact rock (shear contraction, linear elastic, failure and residual strength), three in the across-jointed rock mass (linear elastic, transition zone and sliding failure), and, correspondingly, five in the intermittently jointed rock mass (linear elastic, sliding of the joint, steady growth of post-cracks, joint coalescence failure, and residual strength). According to the strength analysis, the failure envelopes of intact rock and across-jointed rock mass are the upper and lower bounds respectively, and the strength of intermittently jointed rock mass can be evaluated by narrowing the bandwidth of the failure envelope with geo-mechanical analysis.

(4) Numerical tests of rock mass. Two methods are developed and introduced: the distinct element method (DEC) based on in-situ geological mapping, and realistic failure process analysis (RFPA) based on high-definition digital imaging. The operation process and analysis results are demonstrated in detail through research on rock mass parameters based on numerical tests at the Jinping First Stage Hydropower Station and the Baihetan Hydropower Station. The advantages and disadvantages of the two methods are compared, and their respective fields of application are identified.

(5) Intelligent evaluation based on artificial neural networks (ANN). The characteristics of both ANN and rock mass parameter evaluation are discussed and summarized; based on these investigations, ANN has a bright future in the field of rock mass parameter evaluation. The intelligent evaluation of mechanical parameters at the Jinping First Stage Hydropower Station is taken as an example to demonstrate the analysis process, and problems in five aspects, i.e. sample selection, network design, initial value selection, learning rate and expected error, are discussed in detail (a toy sketch of the idea follows below).
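
As a toy illustration of the ANN back-analysis idea in (5), on synthetic data (the thesis's samples, network design and inputs differ): learn the inverse map from a monitored displacement to a rock-mass parameter such as the deformation modulus.

```python
# Back analysis sketch: train a small neural network to invert a
# (stand-in) forward model from displacement to deformation modulus.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
E = rng.uniform(5.0, 40.0, 300)              # deformation modulus, GPa
# Forward-model stand-in: displacement shrinks as the modulus grows.
disp = 100.0 / E + rng.normal(0, 0.05, 300)  # monitored displacement, mm

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(disp.reshape(-1, 1), E)              # invert: displacement -> modulus
print(net.predict([[100.0 / 20.0]]))         # roughly 20 GPa
```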

Relevance:

80.00%

Publisher:

Abstract:

Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support and demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality while respecting viewing satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis.

In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media while reducing the effects of data loss on achievable video quality. The overall approach centres on the strategic packetisation of the underlying scalable video and on how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality.

The contributions of this thesis are new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as Interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
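
A minimal sketch of the interleaving idea referred to above: spreading a GOP's frame data across packets so that a single packet loss degrades many frames slightly rather than destroying consecutive frames. The packet and GOP sizes are illustrative, not the thesis's scheme.

```python
# Round-robin interleaving of a GOP's frames across a fixed set of packets.
def interleave(frames: list[bytes], n_packets: int) -> list[bytes]:
    """Spread each frame's bytes across n_packets packets."""
    packets = [bytearray() for _ in range(n_packets)]
    for f_idx, frame in enumerate(frames):
        for b_idx, byte in enumerate(frame):
            # Offset by frame index so losses do not align on one frame.
            packets[(f_idx + b_idx) % n_packets].append(byte)
    return [bytes(p) for p in packets]

gop = [bytes([i]) * 12 for i in range(8)]   # 8 toy "frames" per GOP
pkts = interleave(gop, n_packets=4)
print([len(p) for p in pkts])               # evenly spread: [24, 24, 24, 24]
```

Because every packet carries an equal share of every frame, losing one packet removes a quarter of each frame's data here, instead of three whole frames.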

Relevance:

80.00%

Publisher:

Abstract:

The paper presents the work initially carried out by Queen's University and RSRE (now QinetiQ) in the development of advanced architectures and microchips based on systolic array architectures. The paper outlines how this has led to the development of highly complex designs for high definition TV and highlights work both on advanced signal processing architectures and on tool flows for advanced systems. © 2006 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations, as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
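
To make the carry-save idea concrete, here is a small illustrative sketch (not from the paper): three operands are reduced to a redundant (sum, carry) pair in which no carry ripples between bit positions, which is what makes fine-grained pipelining of the arithmetic possible.

```python
# Carry-save addition: reduce a+b+c to two numbers whose ordinary sum
# equals a+b+c, with no carry propagation between bit positions.
def carry_save_add(a: int, b: int, c: int) -> tuple[int, int]:
    s = a ^ b ^ c                                 # bitwise sum, no carries
    carry = ((a & b) | (a & c) | (b & c)) << 1    # carries, shifted one place
    return s, carry

s, carry = carry_save_add(13, 7, 9)
assert s + carry == 13 + 7 + 9    # 29: the redundant pair is exact
print(s, carry, s + carry)
```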