794 results for Scalable Video Codec
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated "Floor-Plan" (metadata); performance optimization on parallel architectures; extension of SDSC's scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images ("imagelets"), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the metadata, which tag the data blocks, can be propagated and applied consistently: at the disk level, in distributing the computations across parallel processors, in "imagelet" composition, and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
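For illustration only, a minimal sketch of the block-oriented idea, with hypothetical names throughout (nothing here is from the SDSC codebase): component blocks sized to the available resources carry metadata tags that propagate from loading through imagelet rendering.

```python
# Hypothetical sketch of a metadata-tagged block pipeline; all names are
# illustrative, not from the SDSC project.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Block:
    data: np.ndarray                           # one component block of the volume
    meta: dict = field(default_factory=dict)   # "Floor-Plan"-style tags

def load_blocks(volume, block_shape):
    """Split a volume into blocks sized to fit the available resources,
    tagging each with its origin so downstream stages stay consistent."""
    bz, by, bx = block_shape
    for z in range(0, volume.shape[0], bz):
        for y in range(0, volume.shape[1], by):
            for x in range(0, volume.shape[2], bx):
                sub = volume[z:z+bz, y:y+by, x:x+bx]
                yield Block(sub, {"origin": (z, y, x)})

def render_imagelet(block):
    """Stand-in for the volume renderer: reduce a block to a fractional
    image ('imagelet'), propagating the block's metadata tags."""
    img = block.data.sum(axis=0)               # toy projection along z
    return Block(img, {**block.meta, "stage": "imagelet"})

volume = np.random.rand(64, 64, 64)
imagelets = [render_imagelet(b) for b in load_blocks(volume, (32, 32, 32))]
print(len(imagelets), imagelets[0].meta)
```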
Abstract:
We present a scheme which offers a significant reduction in the resources required to implement linear optics quantum computing. The scheme is a variation of the proposal of Knill, Laflamme and Milburn, and makes use of an incremental approach to the error encoding to boost the probability of success.
Abstract:
Background: EUS is being increasingly utilized for the diagnosis of choledocholithiasis and microlithiasis, especially in patients with biliary colic. Simultaneously, there is also a rising interest in the use of EUS for therapeutic interventions. Objectives: Our goal was to assess the effectiveness of EUS-directed common bile duct (CBD) stone removal and to compare its safety and effectiveness with ERCP-directed intervention. Design: Interim results of a prospective, randomized, single-center, blinded clinical trial. Setting: A single tertiary care referral center. Patients: Fifty-two patients with uncomplicated CBD stones were prospectively randomized to CBD cannulation and stone removal under EUS or ERCP guidance. Main Outcome Measurements and Interventions: The primary outcome measure was the rate of successful cannulation of the CBD. Secondary outcome measures included successful removal of stones and overall complication rates. Results: CBD cannulation followed by stone extraction was successful in 23 of 26 patients (88.5%) in the EUS group (I) versus 25 of 26 patients (96.2%) in the ERCP group (II) (95% CI, -27.65% to 9.88%). Overall, there were 3 complications in the EUS group and 4 complications in the ERCP group. Limitation: The current study is an interim report from a single center, performed by a single operator. Conclusions: Our preliminary analysis indicates that outcomes following EUS-guided CBD stone retrieval are equivalent to those following ERCP, and EUS-related adverse events are similar to those following ERCP. ERCP- and EUS-guided stone retrieval appear to be equally effective for therapeutic interventions of the bile duct. Additional studies are required to validate these preliminary results and to determine predictors of success of EUS-guided stone removal. (Gastrointest Endosc 2009;69:238-43.)
Abstract:
A new method is presented to determine an accurate eigendecomposition of difficult low-temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method is capable of achieving complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case of the decomposition of ethane at 300 K from a microcanonical initial population, with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. We also demonstrate that quadruple precision (16-byte) arithmetic is required irrespective of the eigensolution method used. (C) 2001 Elsevier Science B.V. All rights reserved.
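For readers unfamiliar with the approach, a minimal sketch of a Nesbet-type diagonal-update iteration for a single eigenpair of a symmetric matrix. The paper's generalisation, and the quadruple-precision arithmetic it shows to be necessary, are not reproduced here (NumPy natively offers only double precision); the test matrix is an invented, strongly diagonally dominant example for which this simple scheme converges.

```python
# Sketch of a Nesbet-type update: refine the trial vector component-wise
# using the diagonal of the matrix, then re-normalise and update the
# Rayleigh quotient. Illustrative only.
import numpy as np

def nesbet_eigenpair(A, tol=1e-10, max_iter=500):
    n = A.shape[0]
    c = np.ones(n) / np.sqrt(n)          # trial eigenvector
    lam = c @ A @ c                       # Rayleigh quotient
    d = np.diag(A)
    for _ in range(max_iter):
        r = A @ c - lam * c               # residual
        if np.linalg.norm(r) < tol:
            break
        denom = d - lam
        denom[np.abs(denom) < 1e-8] = 1e-8   # guard near-zero denominators
        c = c - r / denom                 # Nesbet-type diagonal update
        c /= np.linalg.norm(c)
        lam = c @ A @ c
    return lam, c

rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 50.0, 50)) + 0.005 * rng.standard_normal((50, 50))
A = 0.5 * (A + A.T)                       # symmetric, diagonally dominant test case
lam, c = nesbet_eigenpair(A)
print(lam, np.linalg.norm(A @ c - lam * c))
```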
Abstract:
If the Internet could be used as a method of transmitting ultrasound images taken in the field quickly and effectively, it would bring tertiary consultation to even extremely remote centres. The aim of the study was to evaluate the maximum degree of compression of fetal ultrasound video-recordings that would not compromise signal quality. A digital fetal ultrasound video-recording of 90 s was produced, resulting in a file size of 512 MByte. The file was compressed to 2, 5 and 10 MByte. The recordings were viewed by a panel of four experienced observers who were blinded to the compression ratio used. Using a simple seven-point scoring system, the observers rated the quality of the clip on 17 items. The maximum compression ratio considered clinically acceptable was found to be 1:50-1:100. This produced final file sizes of 5-10 MByte, corresponding to a screen size of 320 x 240 pixels running at 15 frames/s. This study expands the possibilities for providing tertiary perinatal services to the wider community.
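A quick back-of-envelope check of the figures quoted above:

```python
# Compression ratios and transmission bitrates implied by the abstract.
raw_mb, duration_s = 512, 90
for target_mb in (2, 5, 10):
    ratio = raw_mb / target_mb
    mbit_per_s = target_mb * 8 / duration_s
    print(f"{target_mb:>2} MByte -> ratio 1:{ratio:.0f}, {mbit_per_s:.2f} Mbit/s")
# The 5-10 MByte files judged acceptable correspond to roughly 1:51-1:102
# compression, i.e. under 1 Mbit/s for transmission of the 90 s clip.
```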
Abstract:
The aim of this experiment was to determine the effectiveness of two video-based perceptual training approaches designed to improve the anticipatory skills of junior tennis players. Players were assigned equally to an explicit learning group, an implicit learning group, a placebo group or a control group. A progressive temporal occlusion paradigm was used to examine, before and after training, the ability of the players to predict the direction of an opponent's service in an in-vivo on-court setting. The players responded either by hitting a return stroke or by making a verbal prediction of stroke direction. Results revealed that the implicit learning group, whose training required them to predict service direction while viewing temporally occluded video footage of the return-of-serve scenario, significantly improved their prediction accuracy after the training intervention. However, this training effect dissipated after a 32-day unfilled retention interval. The explicit learning group, who received instructions about the specific aspects of the pre-contact service kinematics that are informative with respect to service direction, did not demonstrate any significant performance improvements after the intervention. This, together with the absence of any significant improvements for the placebo and control groups, demonstrated that the improvement observed for the implicit learning group was not a consequence of either expectancy or familiarity effects.
Abstract:
In this paper we propose a second linearly scalable method for solving large master equations arising in the context of gas-phase reactive systems. The new method is based on the well-known shift-invert Lanczos iteration, with the inverse of the master equation matrix provided by the GMRES iteration, preconditioned using the diffusion approximation to the master equation. In this way we avoid the cubic scaling of traditional master equation solution methods while maintaining the speed of a partial spectral decomposition. The method is tested using a master equation modeling the formation of propargyl from the reaction of singlet methylene with acetylene, proceeding through long-lived isomerizing intermediates. (C) 2003 American Institute of Physics.
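A minimal sketch of the shift-invert Lanczos idea in SciPy, assuming a toy tridiagonal stand-in for the master equation matrix and an ILU factorization in place of the paper's diffusion-approximation preconditioner:

```python
# Shift-invert Lanczos with the inner solve done by preconditioned GMRES,
# so the matrix is never factorized densely. Toy problem and ILU
# preconditioner are stand-ins for the paper's master equation and
# diffusion-approximation preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
M = sp.diags([np.ones(n-1), -2.0*np.ones(n), np.ones(n-1)], [-1, 0, 1], format="csc")
sigma = 1e-6                                    # shift near the slow eigenvalues

A = (M - sigma * sp.identity(n)).tocsc()
ilu = spla.spilu(A, drop_tol=1e-4)              # stand-in preconditioner
P = spla.LinearOperator((n, n), matvec=ilu.solve, dtype=float)

def shift_invert_matvec(b):
    x, info = spla.gmres(A, b, M=P, atol=1e-12)
    assert info == 0
    return x

OP = spla.LinearOperator((n, n), matvec=shift_invert_matvec, dtype=float)
# Lanczos on (M - sigma*I)^-1: its largest-magnitude eigenvalues map to the
# eigenvalues of M closest to sigma, i.e. the slow relaxation modes.
theta, vecs = spla.eigsh(OP, k=5, which="LM")
print(np.sort(sigma + 1.0 / theta))
```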
Abstract:
In this paper we propose a novel, fast and linearly scalable method for solving master equations arising in the context of gas-phase reactive systems, based on an existing stiff ordinary differential equation integrator. The required solution of a linear system involving the Jacobian matrix is achieved using the GMRES iteration, preconditioned using the diffusion approximation to the master equation. In this way we avoid the cubic scaling of traditional master equation solution methods and maintain the low-temperature robustness of numerical integration. The method is tested using a master equation modelling the formation of propargyl from the reaction of singlet methylene with acetylene, proceeding through long-lived isomerizing intermediates. (C) 2003 American Institute of Physics.
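A sketch of the matrix-free idea under similar caveats: a single backward-Euler loop standing in for the stiff integrator, with each implicit step solved by preconditioned GMRES rather than a cubic-cost dense factorization.

```python
# Backward Euler for dp/dt = M p: each step solves (I - h*M) x = p with
# GMRES. The diffusion-like toy matrix and the ILU preconditioner are
# stand-ins for the paper's master equation and diffusion-approximation
# preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
M = sp.diags([np.ones(n-1), -2.0*np.ones(n), np.ones(n-1)], [-1, 0, 1], format="csc")

def backward_euler(p, h, steps):
    A = (sp.identity(n) - h * M).tocsc()
    ilu = spla.spilu(A, drop_tol=1e-4)          # stand-in preconditioner
    P = spla.LinearOperator((n, n), matvec=ilu.solve, dtype=float)
    for _ in range(steps):
        p, info = spla.gmres(A, p, M=P, atol=1e-12)
        assert info == 0
    return p

p0 = np.zeros(n); p0[n // 2] = 1.0              # microcanonical-style start
# Total population decays through the boundaries in this toy model.
print(backward_euler(p0, h=0.1, steps=100).sum())
```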
Abstract:
Time motion analysis is extensively used to assess the demands of team sports. At present there is only limited information on the reliability of measurements using this analysis tool. The aim of this study was to establish the reliability of an individual observer's time motion analysis of rugby union. Ten elite-level rugby players were individually tracked in Southern Hemisphere Super 12 matches using a digital video camera. The video footage was subsequently analysed by a single researcher on two occasions one month apart. The test-retest reliability was quantified as the typical error of measurement (TEM) and rated as either good (<5% TEM), moderate (5-10% TEM) or poor (>10% TEM). The total time spent in the individual movements of walking, jogging, striding, sprinting, static exertion and being stationary had moderate to poor reliability (5.8-11.1% TEM). The frequency of individual movements had good to poor reliability (4.3-13.6% TEM), while the mean duration of individual movements had moderate reliability (7.1-9.3% TEM). For the individual observer in the present investigation, time motion analysis was shown to be moderately reliable as an evaluation tool for examining the movement patterns of players in competitive rugby. These reliability values should be considered when assessing the movement patterns of rugby players within competition.
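For reference, a small sketch of how a typical error of measurement can be computed and rated against the bands above; the data values are invented for illustration.

```python
# Test-retest typical error of measurement (TEM), expressed as a
# percentage of the grand mean; sample data is invented.
import numpy as np

def typical_error_percent(test, retest):
    diff = np.asarray(test) - np.asarray(retest)
    tem = diff.std(ddof=1) / np.sqrt(2)          # typical error
    grand_mean = np.mean(np.concatenate([test, retest]))
    return 100 * tem / grand_mean

def rate(tem_pct):
    # Bands as used in the abstract: good <5%, moderate 5-10%, poor >10%.
    return "good" if tem_pct < 5 else "moderate" if tem_pct <= 10 else "poor"

test   = np.array([62.1, 55.4, 70.3, 48.9, 66.0])   # e.g. seconds spent sprinting
retest = np.array([65.7, 52.8, 74.1, 51.2, 61.5])
pct = typical_error_percent(test, retest)
print(f"{pct:.1f}% TEM -> {rate(pct)}")
```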
Abstract:
The interoperability of IP video equipment is a critical problem for surveillance systems and other video application developers. ONVIF is one of the two specifications addressing the standardization of networked device interfaces, and it is based on SOAP. This paper addresses the development of an ONVIF library for building video camera clients. We address the choice of a web services toolkit, and how to use the selected toolkit to develop a basic library. From that, we discuss the implementation of features that ...
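A minimal sketch of the SOAP exchange involved, without any toolkit: GetDeviceInformation is a standard ONVIF device-service operation, but the camera address below is hypothetical, and most devices additionally require WS-Security (UsernameToken) headers, omitted here for brevity.

```python
# Hand-rolled SOAP 1.2 request to an ONVIF device service; in practice a
# web services toolkit generates this plumbing from the ONVIF WSDL.
import requests

ENDPOINT = "http://192.168.1.64/onvif/device_service"   # hypothetical camera
ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Body>
    <GetDeviceInformation xmlns="http://www.onvif.org/ver10/device/wsdl"/>
  </s:Body>
</s:Envelope>"""

resp = requests.post(ENDPOINT, data=ENVELOPE,
                     headers={"Content-Type": "application/soap+xml"},
                     timeout=5)
print(resp.status_code)
print(resp.text)   # manufacturer, model, firmware, serial number, ...
```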
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes the WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has some decoded reference frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions may then be derived regarding the impact of the motion field smoothness, and of its correlation to the true motion trajectories, on the compression performance.
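The analytical rate model itself is not reproduced here, but the quantity it is built on is easy to measure; a sketch with stand-in frames:

```python
# Periodogram estimate of the power spectral density of the side-
# information estimation error. The frames below are random stand-ins;
# in practice `frame` is the original and `si` the MCFI interpolation.
import numpy as np

def estimation_error_psd(frame, si):
    residual = frame.astype(float) - si.astype(float)
    spectrum = np.fft.fft2(residual)
    return np.abs(spectrum) ** 2 / residual.size

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (288, 352))            # CIF-sized stand-in
si = np.clip(frame + rng.normal(0, 4, frame.shape), 0, 255)
psd = estimation_error_psd(frame, si)
# A smoother MCFI motion field, better correlated to the true motion,
# lowers the energy of this spectrum and hence the WZ rate needed for a
# given distortion.
print(psd.mean())
```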
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is through motion compensated frame interpolation, where the current frame is estimated based on past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it would be useful to design an architecture where the SI can be more robustly generated at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second mode corresponds to a motion compensated quality enhancement (MCQE) technique in which a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra blocks. The overall solution is evaluated in terms of RD performance, with improvements of up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
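An illustrative skeleton of the block-level decision, with a stand-in rule (past/future disagreement) in place of the paper's correlation estimate; the threshold and frames are invented.

```python
# Toy block-level mode map: blocks whose interpolation is expected to
# correlate poorly are flagged for the MCQE path (low-quality Intra hint
# plus motion estimation). Decision rule and threshold are stand-ins.
import numpy as np

BLOCK = 16

def choose_modes(prev, nxt, threshold=200.0):
    modes = {}
    for y in range(0, prev.shape[0], BLOCK):
        for x in range(0, prev.shape[1], BLOCK):
            p = prev[y:y+BLOCK, x:x+BLOCK]
            n = nxt[y:y+BLOCK, x:x+BLOCK]
            # High past/future disagreement suggests the MCI SI will be
            # poorly correlated -> spend rate on an Intra hint (MCQE).
            modes[(y, x)] = "MCQE" if np.mean((p - n) ** 2) > threshold else "MCI"
    return modes

# Smooth gradient frame and a horizontally shifted version as stand-ins
# for the past/future references around the frame to interpolate.
prev = np.add.outer(np.arange(288.0), np.arange(352.0)) % 256
nxt = np.roll(prev, 8, axis=1)
modes = choose_modes(prev, nxt)
print(sum(m == "MCQE" for m in modes.values()), "of", len(modes), "blocks -> MCQE")
```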
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion compensated errors in some frame regions and rather small errors in others, depending on the video content. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a WZ-coding-mode-only solution.
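A sketch of the per-block decision, with crude zeroth-order entropy proxies standing in for the paper's rate estimators; everything below is illustrative.

```python
# Per-block mode decision: estimate a rate for Intra coding and for WZ
# coding, keep the cheaper mode. Both estimators are entropy proxies,
# chosen only to keep the example light.
import numpy as np

def entropy_bits(values):
    # Zeroth-order empirical entropy of the samples, as a crude rate proxy.
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() * values.size)

def select_mode(block, si_block):
    intra_rate = entropy_bits(block - int(block.mean()))   # DC-predicted residual
    wz_rate = entropy_bits(block - si_block)               # correlation noise vs SI
    return "Intra" if intra_rate < wz_rate else "WZ"

rng = np.random.default_rng(2)
block = rng.integers(0, 256, (16, 16))
good_si = np.clip(block + rng.normal(0, 2, block.shape).astype(int), 0, 255)
bad_si = rng.integers(0, 256, (16, 16))
print(select_mode(block, good_si))   # well-correlated SI -> WZ tends to win
print(select_mode(block, bad_si))    # 'critical' block -> Intra tends to win
```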